Chains
https://python.langchain.com/docs/modules/chains/

Using an LLM in isolation is fine for simple applications, but more complex applications require chaining LLMs, either with each other or with other components.

LangChain provides the Chain interface for such "chained" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:

class Chain(BaseModel, ABC):
    """Base interface that all chains should implement."""

    memory: BaseMemory
    callbacks: Callbacks

    def __call__(
        self,
        inputs: Any,
        return_only_outputs: bool = False,
        callbacks: Callbacks = None,
    ) -> Dict[str, Any]:
        ...

This idea of composing components together in a chain is simple but powerful. It drastically simplifies, and makes more modular, the implementation of complex applications, which in turn makes it much easier to debug, maintain, and improve your applications.

For more specifics check out:

- How-to, for walkthroughs of different chain features
- Foundational, to get acquainted with core building-block chains
- Documents, to learn how to incorporate documents into chains
- Popular, for chains covering the most common use cases
- Additional, to see some of the more advanced chains and integrations that you can use out of the box

Why do we need chains?

Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

Get started

Using LLMChain

The LLMChain is the most basic building-block chain. It takes in a prompt template, formats it with the user input, and returns the response from an LLM.
To use the LLMChain, first create a prompt template.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.

from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))

    Colorful Toes Co.

If there are multiple variables, you can input them all at once using a dictionary.

prompt = PromptTemplate(
    input_variables=["company", "product"],
    template="What is a good name for {company} that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
    "company": "ABC Startup",
    "product": "colorful socks",
}))

    Socktopia Colourful Creations.

You can use a chat model in an LLMChain as well:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))

    Rainbow Socks Co.

Foundational
https://python.langchain.com/docs/modules/chains/foundational/

- LLM: An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
- Router: This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input.
- Sequential: The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
- Transformation: This notebook showcases using a generic transformation chain.

Sequential
https://python.langchain.com/docs/modules/chains/foundational/sequential_chains

The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.

In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:

- SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
- SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall_chain.run("Tragedy at sunset on the beach")

    > Entering new SimpleSequentialChain chain...
    Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future. The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever. The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.
    Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats. The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.

    > Finished chain.

print(review)

    Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats. The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
Sequential Chain

Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs. Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about it because we have multiple inputs.

# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.

Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title", "era"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis")

# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain

overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["synopsis", "review"],
    verbose=True)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})

    > Entering new SequentialChain chain...

    > Finished chain.

    {'title': 'Tragedy at sunset on the beach',
     'era': 'Victorian England',
     'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them.\n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below.\n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
     'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."}
Memory in Sequential Chains

Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this and clean up your chains.

For example, using the previous playwright SequentialChain, let's say you wanted to include some context about the date, time and location of the play, and, using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or we can add a SimpleMemory to the chain to manage this context:
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

llm = OpenAI(temperature=.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date, time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.

Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}

Social Media Post:"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")

overall_chain = SequentialChain(
    memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
    chains=[synopsis_chain, review_chain, social_chain],
    input_variables=["era", "title"],
    # Here we return multiple variables
    output_variables=["social_post_text"],
    verbose=True)

overall_chain({"title": "Tragedy at sunset on the beach", "era": "Victorian England"})

    > Entering new SequentialChain chain...

    > Finished chain.

    {'title': 'Tragedy at sunset on the beach',
     'era': 'Victorian England',
     'time': 'December 25th, 8pm PST',
     'location': 'Theater in the Park',
     'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}

Router
https://python.langchain.com/docs/modules/chains/foundational/router

This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input. Router chains are made up of two components:

- The RouterChain itself (responsible for selecting the next chain to call)
- destination_chains: chains that the router chain can route to

In this notebook we will focus on the different types of routing chains. We will show these routing chains used in a MultiPromptChain to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.

from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}"""
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template,
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template,
    },
]

llm = OpenAI()

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain
default_chain = ConversationChain(llm=llm, output_key="text")

LLMRouterChain

This chain uses an LLM to determine how to route things.

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
print(chain.run("What is black body radiation?"))

    > Entering new MultiPromptChain chain...
    physics: {'input': 'What is black body radiation?'}
    > Finished chain.
    Black body radiation is the term used to describe the electromagnetic radiation emitted by a “black body”—an object that absorbs all radiation incident upon it. A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It does not reflect, emit or transmit energy. This type of radiation is the result of the thermal motion of the body's atoms and molecules, and it is emitted at all wavelengths. The spectrum of radiation emitted is described by Planck's law and is known as the black body spectrum.

print(
    chain.run(
        "What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"
    )
)

    > Entering new MultiPromptChain chain...
    math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
    > Finished chain.
    ?

    The answer is 43. One plus 43 is 44 which is divisible by 3.

print(chain.run("What is the name of the type of cloud that rins"))

    > Entering new MultiPromptChain chain...
    None: {'input': 'What is the name of the type of cloud that rains?'}
    > Finished chain.
    The type of cloud that rains is called a cumulonimbus cloud. It is a tall and dense cloud that is often accompanied by thunder and lightning.

EmbeddingRouterChain

The EmbeddingRouterChain uses embeddings and similarity to route between destination chains.

from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Chroma

names_and_descriptions = [
    ("physics", ["for questions about physics"]),
    ("math", ["for questions about math"]),
]

router_chain = EmbeddingRouterChain.from_names_and_descriptions(
    names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"]
)

    Using embedded DuckDB without persistence: data will be transient

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)

print(chain.run("What is black body radiation?"))

    > Entering new MultiPromptChain chain...
    physics: {'input': 'What is black body radiation?'}
    > Finished chain.
    Black body radiation is the emission of energy from an idealized physical body (known as a black body) that is in thermal equilibrium with its environment. It is emitted in a characteristic pattern of frequencies known as a black-body spectrum, which depends only on the temperature of the body. The study of black body radiation is an important part of astrophysics and atmospheric physics, as the thermal radiation emitted by stars and planets can often be approximated as black body radiation.
print(
    chain.run(
        "What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"
    )
)

    > Entering new MultiPromptChain chain...
    math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
    > Finished chain.
    ?

    Answer: The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43.

Transformation
https://python.langchain.com/docs/modules/chains/foundational/transformation

This notebook showcases using a generic transformation chain.

As an example, we will create a dummy transformation that takes in a super long text, filters the text down to only the first 3 paragraphs, and then passes that into an LLMChain to summarize them.

from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

def transform_func(inputs: dict) -> dict:
    text = inputs["text"]
    shortened_text = "\n\n".join(text.split("\n\n")[:3])
    return {"output_text": shortened_text}

transform_chain = TransformChain(
    input_variables=["text"], output_variables=["output_text"], transform=transform_func
)

template = """Summarize this text:

{output_text}

Summary:"""
prompt = PromptTemplate(input_variables=["output_text"], template=template)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)

sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(state_of_the_union)

    ' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'

LLM
https://python.langchain.com/docs/modules/chains/foundational/llm_chain

An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.

An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM, and returns the LLM output.

Get started

from langchain import PromptTemplate, OpenAI, LLMChain

prompt_template = "What is a good name for a company that makes {product}?"

llm = OpenAI(temperature=0)
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")

    {'product': 'colorful socks', 'text': '\n\nSocktastic!'}

Additional ways of running LLM Chain

Aside from the __call__ and run methods shared by all Chain objects, LLMChain offers a few more ways of calling the chain logic:

apply allows you to run the chain against a list of inputs:

input_list = [
    {"product": "socks"},
    {"product": "computer"},
    {"product": "shoes"}
]
llm_chain.apply(input_list)
    [{'text': '\n\nSocktastic!'},
     {'text': '\n\nTechCore Solutions.'},
     {'text': '\n\nFootwear Factory.'}]

generate is similar to apply, except it returns an LLMResult instead of a string. An LLMResult often contains useful generation info such as token usage and finish reason.

llm_chain.generate(input_list)

    LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})

predict is similar to the run method except that the input keys are specified as keyword arguments instead of a Python dict.

# Single input example
llm_chain.predict(product="colorful socks")

    '\n\nSocktastic!'

# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))
llm_chain.predict(adjective="sad", subject="ducks")

    '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'

Parsing the outputs

By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser to the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply.
With predict:

from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow"""
prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
llm_chain = LLMChain(prompt=prompt, llm=llm)

llm_chain.predict()

    '\n\nRed, orange, yellow, green, blue, indigo, violet'

With predict_and_parse:

llm_chain.predict_and_parse()

    ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']

Initialize from string

You can also construct an LLMChain from a string template directly.

template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain.from_string(llm=llm, template=template)

llm_chain.predict(adjective="sad", subject="ducks")

    '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'

Documents
https://python.langchain.com/docs/modules/chains/document/

These are the core chains for working with Documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.

These chains all implement a common interface:

class BaseCombineDocumentsChain(Chain, ABC):
    """Base interface for chains combining documents."""

    @abstractmethod
    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        """Combine documents into a single string."""

- Stuff: The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt, and passes that prompt to an LLM.
- Refine: The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
- Map reduce: The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
- Map re-rank: The map re-rank documents chain runs an initial prompt on each document that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.
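In practice these combine-documents chains are usually obtained through helper constructors rather than instantiated directly. A minimal sketch (the load_qa_chain helper is an assumption from elsewhere in the LangChain API, and the document and question are placeholders):

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [Document(page_content="LangChain is a framework for developing LLM applications.")]

# chain_type selects one of the four strategies described above:
# "stuff", "refine", "map_reduce", or "map_rerank".
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
chain.run(input_documents=docs, question="What is LangChain?")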

Map reduce
https://python.langchain.com/docs/modules/chains/document/map_reduce

The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
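A hedged sketch of building this chain through the summarization helper (load_summarize_chain and the sample documents are assumptions, not part of the page above):

from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.schema import Document

# Placeholder documents standing in for a long, pre-split text.
docs = [
    Document(page_content="LangChain provides several chains for working with documents."),
    Document(page_content="The map reduce chain summarizes each document, then combines the summaries."),
]

# Each doc is summarized independently (the Map step), then the partial
# summaries are combined into one final summary (the Reduce step).
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
chain.run(docs)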

Refine
https://python.langchain.com/docs/modules/chains/document/refine

The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.

Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context. The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain. There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
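A hedged sketch of the same iterative behavior through the QA helper (load_qa_chain, the documents, and the question are assumptions):

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [
    Document(page_content="The expedition departed in March with four ships."),
    Document(page_content="Two ships were lost in a storm; the rest arrived in June."),
]

# The refine chain answers from the first document, then revisits and
# updates the answer once for each remaining document.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")
chain.run(input_documents=docs, question="What happened to the expedition?")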

Map re-rank
https://python.langchain.com/docs/modules/chains/document/map_rerank

The map re-rank documents chain runs an initial prompt on each document that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.
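A hedged sketch via the QA helper (load_qa_chain and the documents are assumptions):

from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [
    Document(page_content="Paris is the capital of France."),
    Document(page_content="Berlin is the capital of Germany."),
]

# Every document is answered and scored independently; the chain
# returns the answer with the highest self-reported score.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank")
chain.run(input_documents=docs, question="What is the capital of France?")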

Stuff
https://python.langchain.com/docs/modules/chains/document/stuff

The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt, and passes that prompt to an LLM.

This chain is well-suited for applications where documents are small and only a few are passed in for most calls.
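To make the mechanics concrete, a minimal sketch that wires a StuffDocumentsChain by hand (the prompt, the "context" variable name, and the documents are illustrative assumptions):

from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import Document

prompt = PromptTemplate.from_template("Summarize the following text:\n\n{context}")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

# document_variable_name names the prompt variable that receives the
# concatenated documents.
chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="context")

docs = [
    Document(page_content="LangChain ships several document chains."),
    Document(page_content="The stuff chain simply concatenates the documents into one prompt."),
]
chain.run(docs)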

How to
https://python.langchain.com/docs/modules/chains/how_to/

- Async API: LangChain provides async support for Chains by leveraging the asyncio library.
- Different call methods: All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__.
- Custom chain: To implement your own custom chain you can subclass Chain and implement the required methods.
- Debugging chains: It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing.
- Loading from LangChainHub: This notebook covers how to load chains from LangChainHub.
- Adding memory (state): Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.
- Serialization: This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.

Different call methods
https://python.langchain.com/docs/modules/chains/how_to/call_methods

All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__:

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

chat = ChatOpenAI(temperature=0)
prompt_template = "Tell me a {adjective} joke"
llm_chain = LLMChain(llm=chat, prompt=PromptTemplate.from_template(prompt_template))

llm_chain(inputs={"adjective": "corny"})

    {'adjective': 'corny', 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}

By default, __call__ returns both the input and output key values. You can configure it to only return output key values by setting return_only_outputs to True.

llm_chain("corny", return_only_outputs=True)

    {'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}

If the Chain only outputs one output key (i.e. it only has one element in its output_keys), you can use the run method. Note that run outputs a string instead of a dictionary.

# llm_chain only has one output key, so we can use run
llm_chain.output_keys

    ['text']

llm_chain.run({"adjective": "corny"})

    'Why did the tomato turn red? Because it saw the salad dressing!'

In the case of one input key, you can input the string directly without specifying the input mapping.
# These two are equivalent
llm_chain.run({"adjective": "corny"})
llm_chain.run("corny")

# These two are also equivalent
llm_chain("corny")
llm_chain({"adjective": "corny"})

    {'adjective': 'corny', 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}

Tip: You can easily integrate a Chain object as a Tool in your Agent via its run method, as sketched below.
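A minimal sketch of that tip, reusing the llm_chain defined above (the tool name and description strings are illustrative):

from langchain.agents import Tool

joke_tool = Tool(
    name="JokeTeller",
    func=llm_chain.run,  # single-input chains map cleanly onto a Tool
    description="Useful for telling a joke about a given topic.",
)
joke_tool.run("corny")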

Custom chain
https://python.langchain.com/docs/modules/chains/how_to/custom_chain

To implement your own custom chain you can subclass Chain and implement the following methods:

from __future__ import annotations

from typing import Any, Dict, List, Optional

from pydantic import Extra

from langchain.schema import BaseLanguageModel
from langchain.callbacks.manager import (
    AsyncCallbackManagerForChainRun,
    CallbackManagerForChainRun,
)
from langchain.chains.base import Chain
from langchain.prompts.base import BasePromptTemplate


class MyCustomChain(Chain):
    """
    An example of a custom chain.
    """

    prompt: BasePromptTemplate
    """Prompt object to use."""
    llm: BaseLanguageModel
    output_key: str = "text"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Will be whatever keys the prompt expects.

        :meta private:
        """
        return self.prompt.input_variables

    @property
    def output_keys(self) -> List[str]:
        """Will always return text key.

        :meta private:
        """
        return [self.output_key]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Your custom chain logic goes here
        # This is just an example that mimics LLMChain
        prompt_value = self.prompt.format_prompt(**inputs)

        # Whenever you call a language model, or another chain, you should pass
        # a callback manager to it. This allows the inner run to be tracked by
        # any callbacks that are registered on the outer run.
        # You can always obtain a callback manager for this by calling
        # `run_manager.get_child()` as shown below.
        response = self.llm.generate_prompt(
            [prompt_value], callbacks=run_manager.get_child() if run_manager else None
        )

        # If you want to log something about this run, you can do so by calling
        # methods on the `run_manager`, as shown below. This will trigger any
        # callbacks that are registered for that event.
        if run_manager:
            run_manager.on_text("Log something about this run")

        return {self.output_key: response.generations[0][0].text}

    async def _acall(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Your custom chain logic goes here
        # This is just an example that mimics LLMChain
        prompt_value = self.prompt.format_prompt(**inputs)

        # Whenever you call a language model, or another chain, you should pass
        # a callback manager to it. This allows the inner run to be tracked by
        # any callbacks that are registered on the outer run.
        # You can always obtain a callback manager for this by calling
        # `run_manager.get_child()` as shown below.
        response = await self.llm.agenerate_prompt(
            [prompt_value], callbacks=run_manager.get_child() if run_manager else None
        )

        # If you want to log something about this run, you can do so by calling
        # methods on the `run_manager`, as shown below. This will trigger any
        # callbacks that are registered for that event.
        if run_manager:
            await run_manager.on_text("Log something about this run")

        return {self.output_key: response.generations[0][0].text}

    @property
    def _chain_type(self) -> str:
        return "my_custom_chain"


from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate

chain = MyCustomChain(
    prompt=PromptTemplate.from_template("tell us a joke about {topic}"),
    llm=ChatOpenAI(),
)

chain.run({"topic": "callbacks"}, callbacks=[StdOutCallbackHandler()])

    > Entering new MyCustomChain chain...
    Log something about this run
    > Finished chain.

    'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'

Serialization
https://python.langchain.com/docs/modules/chains/how_to/serialization

This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.

Saving a chain to disk

First, let's go over how to save a chain to disk. This can be done with the .save method, specifying a file path with a .json or .yaml extension.

from langchain import PromptTemplate, OpenAI, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)

llm_chain.save("llm_chain.json")

Let's now take a look at what's inside this saved file:

cat llm_chain.json

    {
        "memory": null,
        "verbose": true,
        "prompt": {
            "input_variables": [
                "question"
            ],
            "output_parser": null,
            "template": "Question: {question}\n\nAnswer: Let's think step by step.",
            "template_format": "f-string"
        },
        "llm": {
            "model_name": "text-davinci-003",
            "temperature": 0.0,
            "max_tokens": 256,
            "top_p": 1,
            "frequency_penalty": 0,
            "presence_penalty": 0,
            "n": 1,
            "best_of": 1,
            "request_timeout": null,
            "logit_bias": {},
            "_type": "openai"
        },
        "output_key": "text",
        "_type": "llm_chain"
    }

Loading a chain from disk

We can load a chain from disk by using the load_chain method.

from langchain.chains import load_chain

chain = load_chain("llm_chain.json")
chain.run("whats 2 + 2")
    > Entering new LLMChain chain...
    Prompt after formatting:
    Question: whats 2 + 2

    Answer: Let's think step by step.

    > Finished chain.

    ' 2 + 2 = 4'

Saving components separately

In the above example, we can see that the prompt and llm configuration information is saved in the same JSON as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component.

llm_chain.prompt.save("prompt.json")

cat prompt.json

    {
        "input_variables": [
            "question"
        ],
        "output_parser": null,
        "template": "Question: {question}\n\nAnswer: Let's think step by step.",
        "template_format": "f-string"
    }

llm_chain.llm.save("llm.json")

cat llm.json

    {
        "model_name": "text-davinci-003",
        "temperature": 0.0,
        "max_tokens": 256,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
        "n": 1,
        "best_of": 1,
        "request_timeout": null,
        "logit_bias": {},
        "_type": "openai"
    }

config = {
    "memory": None,
    "verbose": True,
    "prompt_path": "prompt.json",
    "llm_path": "llm.json",
    "output_key": "text",
    "_type": "llm_chain",
}

import json

with open("llm_chain_separate.json", "w") as f:
    json.dump(config, f, indent=2)

cat llm_chain_separate.json

    {
        "memory": null,
        "verbose": true,
        "prompt_path": "prompt.json",
        "llm_path": "llm.json",
        "output_key": "text",
        "_type": "llm_chain"
    }

We can then load it in the same way:

chain = load_chain("llm_chain_separate.json")
chain.run("whats 2 + 2")

    > Entering new LLMChain chain...
    Prompt after formatting:
    Question: whats 2 + 2

    Answer: Let's think step by step.

    > Finished chain.

    ' 2 + 2 = 4'

Adding memory (state)
https://python.langchain.com/docs/modules/chains/how_to/memory

Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.

Get started

from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chat = ChatOpenAI(temperature=0)  # the original snippet assumes a `chat` model defined earlier
conversation = ConversationChain(
    llm=chat,
    memory=ConversationBufferMemory()
)

conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")
# -> The first three colors of a rainbow are red, orange, and yellow.
conversation.run("And the next 4?")
# -> The next four colors of a rainbow are green, blue, indigo, and violet.

    'The next four colors of a rainbow are green, blue, indigo, and violet.'

Essentially, BaseMemory defines an interface for how LangChain stores memory. It allows reading of stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section.
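To make that interface concrete, a minimal sketch driving ConversationBufferMemory through those two methods directly (the exact formatting of the returned history string is an assumption):

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# save_context stores one input/output exchange.
memory.save_context({"input": "hi"}, {"output": "hello"})

# load_memory_variables returns the stored state as prompt-ready variables.
print(memory.load_memory_variables({}))
# e.g. {'history': 'Human: hi\nAI: hello'}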

Debugging chains
https://python.langchain.com/docs/modules/chains/how_to/debugging

It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing.

Setting verbose to True will print out some internal states of the Chain object while it is being run.

from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chat = ChatOpenAI(temperature=0)  # the original snippet assumes a `chat` model defined earlier
conversation = ConversationChain(
    llm=chat,
    memory=ConversationBufferMemory(),
    verbose=True
)
conversation.run("What is ChatGPT?")

    > Entering new ConversationChain chain...
    Prompt after formatting:
    The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

    Current conversation:

    Human: What is ChatGPT?
    AI:

    > Finished chain.

    'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'
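If you want a similar trace for a single call without flipping verbose on for the whole chain, one option (a sketch, reusing the StdOutCallbackHandler pattern from the custom chain guide) is to pass callbacks at call time:

from langchain.callbacks.stdout import StdOutCallbackHandler

# Same chain as above but built without verbose=True; the handler
# prints the trace for this call only.
quiet_conversation = ConversationChain(llm=chat, memory=ConversationBufferMemory())
quiet_conversation.run("What is ChatGPT?", callbacks=[StdOutCallbackHandler()])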

Loading from LangChainHub
https://python.langchain.com/docs/modules/chains/how_to/from_hub

This notebook covers how to load chains from LangChainHub.

from langchain.chains import load_chain

chain = load_chain("lc://chains/llm-math/chain.json")
chain.run("whats 2 raised to .12")

    > Entering new LLMMathChain chain...
    whats 2 raised to .12
    Answer: 1.0791812460476249
    > Finished chain.

    'Answer: 1.0791812460476249'

Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import TextLoader

loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

    Running Chroma using direct local API.
    Using DuckDB in-memory for database. Data will be transient.
https://python.langchain.com/docs/modules/chains/how_to/from_hub
2b6b4e0e208b-2
local API. Using DuckDB in-memory for database. Data will be transient.

chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)
query = "What did the president say about Ketanji Brown Jackson"
chain.run(query)

    " The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."
https://python.langchain.com/docs/modules/chains/how_to/from_hub
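Hub chains are ordinary serialized chains, so the same load_chain helper can also read a file you saved yourself. A minimal sketch (the llm_math.json filename is an illustrative choice; saving assumes the chain is serializable, has no memory attached, and that your LangChain version provides LLMMathChain.from_llm):

from langchain import OpenAI
from langchain.chains import LLMMathChain, load_chain

llm_math = LLMMathChain.from_llm(OpenAI(temperature=0))

# Persist the chain's configuration (not the model weights) to disk...
llm_math.save("llm_math.json")

# ...and restore it later without reconstructing it by hand.
chain = load_chain("llm_math.json")
chain.run("whats 2 raised to .12")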
1070e46081c2-0
Async API | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/how_to/async_chain
1070e46081c2-1
Async API

LangChain provides async support for Chains by leveraging the asyncio library. Async methods are currently supported in LLMChain (through arun, apredict, and acall), LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap.

import asyncio
import time

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

def generate_serially():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    for _ in range(5):
        resp = chain.run(product="toothpaste")
        print(resp)

async def async_generate(chain):
    resp = await chain.arun(product="toothpaste")
    print(resp)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
https://python.langchain.com/docs/modules/chains/how_to/async_chain
1070e46081c2-2
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    tasks = [async_generate(chain) for _ in range(5)]
    await asyncio.gather(*tasks)

s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")

s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m")

    BrightSmile Toothpaste Company
    BrightSmile Toothpaste Co.
    BrightSmile Toothpaste
    Gleaming Smile Inc.
    SparkleSmile Toothpaste
    Concurrent executed in 1.54 seconds.
    BrightSmile Toothpaste Co.
    MintyFresh Toothpaste Co.
    SparkleSmile Toothpaste.
    Pearly Whites Toothpaste Co.
    BrightSmile Toothpaste.
    Serial executed in 6.38 seconds.
https://python.langchain.com/docs/modules/chains/how_to/async_chain
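As the comment in the example notes, top-level await only works in environments like Jupyter that already run an event loop. A minimal sketch of the same concurrent pattern as a standalone script (assuming an OpenAI API key is configured):

import asyncio

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

async def main():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    # Fire off five generations concurrently and wait for all of them.
    tasks = [chain.arun(product="toothpaste") for _ in range(5)]
    for resp in await asyncio.gather(*tasks):
        print(resp)

if __name__ == "__main__":
    # Outside Jupyter there is no running event loop, so start one explicitly.
    asyncio.run(main())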
991945926421-0
Popular | 🦜️🔗 Langchain

Popular

📄️ API chains
APIChain enables using LLMs to interact with APIs to retrieve relevant information. Construct the chain by providing a question relevant to the provided API documentation.

📄️ Retrieval QA
This example showcases question answering over an index.

📄️ Conversational Retrieval QA
The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

📄️ Using OpenAI functions
This walkthrough demonstrates how to incorporate the OpenAI function-calling API in a chain.

📄️ SQL
This example demonstrates the use of the SQLDatabaseChain for answering questions over a SQL database.

📄️ Summarization
A summarization chain can be used to summarize multiple documents. One way is to split the input into smaller documents and operate over them with a MapReduceDocumentsChain. You can instead choose to run the summarization step with a StuffDocumentsChain or a RefineDocumentsChain.
https://python.langchain.com/docs/modules/chains/popular/
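To make the summarization entry concrete: the load_summarize_chain helper builds the MapReduceDocumentsChain (or the stuff/refine variants) for you. A minimal sketch, where the file path is borrowed from the other examples in these docs and is an assumption:

from langchain.llms import OpenAI
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)

# Split one long text into chunks, then wrap each chunk as a Document.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
texts = CharacterTextSplitter().split_text(state_of_the_union)
docs = [Document(page_content=t) for t in texts[:3]]

# chain_type can be "map_reduce", "stuff", or "refine", as described above.
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))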
a2555e1d69a8-0
Retrieval QA | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-1
Retrieval QA

This example showcases question answering over an index.

from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

    " The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Chain Type

You can easily specify different chain types to load
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-2
and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see this notebook.

There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to map_reduce.

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

    " The president said that Judge Ketanji Brown Jackson is one of our nation's top legal minds, a former top litigator in private practice and a former federal public defender, from a family of public school educators and police officers, a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

This approach makes it simple to change the chain_type, but it doesn't provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as you did in this notebook) and then pass it to the RetrievalQA chain with the combine_documents_chain parameter. For example:

from langchain.chains.question_answering import load_qa_chain

qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

    " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-3
former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Custom Prompts

You can pass in custom prompts to do question answering. These prompts are the same prompts that you can pass into the base question answering chain.

from langchain.prompts import PromptTemplate

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

chain_type_kwargs = {"prompt": PROMPT}
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

    " Il presidente ha detto che Ketanji Brown Jackson è una delle menti legali più importanti del paese, che continuerà l'eccellenza di Justice Breyer e che ha ricevuto un ampio sostegno, da Fraternal Order of Police a ex giudici nominati da democratici e repubblicani."

Return Source Documents

Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain.

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(),
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-4
return_source_documents=True)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"query": query})
result["result"]

    " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and a former federal public defender from a family of public school educators and police officers, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

result["source_documents"]

    [Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
     Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-5
the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-6
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]

Alternatively, if our documents have a "source" metadata key, we can use the RetrievalQAWithSourcesChain to cite our sources:

docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))])

from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI

chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question":
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
a2555e1d69a8-7
"What did the president say about Justice Breyer"}, return_only_outputs=True) {'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\n', 'sources': '31-pl'}PreviousAPI chainsNextConversational Retrieval QACommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/modules/chains/popular/vector_db_qa
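One knob this page does not show explicitly: the retriever itself can be tuned when it is created from the vector store. A minimal sketch, assuming the docsearch vector store built above, that limits retrieval to the two most similar chunks:

# Retrieve only the top 2 chunks instead of the vector store's default.
retriever = docsearch.as_retriever(search_kwargs={"k": 2})
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
qa.run("What did the president say about Ketanji Brown Jackson")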
c3a547b5ea13-0
API chains | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-1
API chains

APIChain enables using LLMs to interact with APIs to retrieve relevant information. Construct the chain by providing a question relevant to the provided API documentation.

from langchain.chains.api.prompt import API_RESPONSE_PROMPT
from langchain.chains import APIChain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

OpenMeteo Example

from langchain.chains.api import open_meteo_docs

chain_new = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain_new.run('What is the weather like right now in Munich, Germany in degrees Fahrenheit?')

    > Entering new APIChain chain...
    https://api.open-meteo.com/v1/forecast?latitude=48.1351&longitude=11.5820&temperature_unit=fahrenheit&current_weather=true
    {"latitude":48.14,"longitude":11.58,"generationtime_ms":0.33104419708251953,"utc_offset_seconds":0,"timezone":"GMT","timezone_abbreviation":"GMT","elevation":521.0,"current_weather":{"temperature":33.4,"windspeed":6.8,"winddirection":198.0,"weathercode":2,"time":"2023-01-16T01:00"}}
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-2
    > Finished chain.

    ' The current temperature in Munich, Germany is 33.4 degrees Fahrenheit with a windspeed of 6.8 km/h and a wind direction of 198 degrees. The weathercode is 2.'

TMDB Example

import os
os.environ['TMDB_BEARER_TOKEN'] = ""

from langchain.chains.api import tmdb_docs

headers = {"Authorization": f"Bearer {os.environ['TMDB_BEARER_TOKEN']}"}
chain = APIChain.from_llm_and_api_docs(llm, tmdb_docs.TMDB_DOCS, headers=headers, verbose=True)
chain.run("Search for 'Avatar'")

    > Entering new APIChain chain...
    https://api.themoviedb.org/3/search/movie?query=Avatar&language=en-US
    {"page":1,"results":[{"adult":false,"backdrop_path":"/o0s4XsEDfDlvit5pDRKjzXR4pp2.jpg","genre_ids":[28,12,14,878],"id":19995,"original_language":"en","original_title":"Avatar","overview":"In the 22nd century, a paraplegic Marine is dispatched to the moon Pandora on a unique mission, but becomes torn between following orders and protecting an alien civilization.","popularity":2041.691,"poster_path":"/jRXYjXNq0Cs2TcJjLkki24MLp7u.jpg","release_date":"2009-12-15","title":"Avatar","video":false,"vote_average":7.6,"vote_count":27777},{"adult":false,"backdrop_path":"/s16H6tpK2utvwDtzZ8Qy4qm5Emw.jpg","genre_ids":[878,12,28],"id":76600,"original_language":"en","original_title":"Avatar:
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-3
The Way of Water","overview":"Set more than a decade after the events of the first film, learn the story of the Sully family (Jake, Neytiri, and their kids), the trouble that follows them, the lengths they go to keep each other safe, the battles they fight to stay alive, and the tragedies they endure.","popularity":3948.296,"poster_path":"/t6HIqrRAclMCA60NsSmeqe9RmNV.jpg","release_date":"2022-12-14","title":"Avatar: The Way of Water","video":false,"vote_average":7.7,"vote_count":4219},{"adult":false,"backdrop_path":"/uEwGFGtao9YG2JolmdvtHLLVbA9.jpg","genre_ids":[99],"id":111332,"original_language":"en","original_title":"Avatar: Creating the World of Pandora","overview":"The Making-of James Cameron's Avatar. It shows interesting parts of the work on the set.","popularity":541.809,"poster_path":"/sjf3xjuofCtDhZghJRzXlTiEjJe.jpg","release_date":"2010-02-07","title":"Avatar: Creating the World of Pandora","video":false,"vote_average":7.3,"vote_count":35},{"adult":false,"backdrop_path":null,"genre_ids":[99],"id":287003,"original_language":"en","original_title":"Avatar: Scene Deconstruction","overview":"The deconstruction of the Avatar scenes and sets","popularity":394.941,"poster_path":"/uCreCQFReeF0RiIXkQypRYHwikx.jpg","release_date":"2009-12-18","title":"Avatar: Scene
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-4
Deconstruction","video":false,"vote_average":7.8,"vote_count":12},{"adult":false,"backdrop_path":null,"genre_ids":[28,18,878,12,14],"id":83533,"original_language":"en","original_title":"Avatar 3","overview":"","popularity":172.488,"poster_path":"/4rXqTMlkEaMiJjiG0Z2BX6F6Dkm.jpg","release_date":"2024-12-18","title":"Avatar 3","video":false,"vote_average":0,"vote_count":0},{"adult":false,"backdrop_path":null,"genre_ids":[28,878,12,14],"id":216527,"original_language":"en","original_title":"Avatar 4","overview":"","popularity":162.536,"poster_path":"/qzMYKnT4MG1d0gnhwytr4cKhUvS.jpg","release_date":"2026-12-16","title":"Avatar 4","video":false,"vote_average":0,"vote_count":0},{"adult":false,"backdrop_path":null,"genre_ids":[28,12,14,878],"id":393209,"original_language":"en","original_title":"Avatar 5","overview":"","popularity":124.722,"poster_path":"/rtmmvqkIC5zDMEd638Es2woxbz8.jpg","release_date":"2028-12-20","title":"Avatar 5","video":false,"vote_average":0,"vote_count":0},{"adult":false,"backdrop_path":"/nNceJtrrovG1MUBHMAhId0ws9Gp.jpg","genre_ids":[99],"id":183392,"original_language":"en","original_title":"Capturing Avatar","overview":"Capturing
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-5
Avatar is a feature length behind-the-scenes documentary about the making of Avatar. It uses footage from the film's development, as well as stock footage from as far back as the production of Titanic in 1995. Also included are numerous interviews with cast, artists, and other crew members. The documentary was released as a bonus feature on the extended collector's edition of Avatar.","popularity":109.842,"poster_path":"/26SMEXJl3978dn2svWBSqHbLl5U.jpg","release_date":"2010-11-16","title":"Capturing Avatar","video":false,"vote_average":7.8,"vote_count":39},{"adult":false,"backdrop_path":"/eoAvHxfbaPOcfiQyjqypWIXWxDr.jpg","genre_ids":[99],"id":1059673,"original_language":"en","original_title":"Avatar: The Deep Dive - A Special Edition of 20/20","overview":"An inside look at one of the most anticipated movie sequels ever with James Cameron and cast.","popularity":629.825,"poster_path":"/rtVeIsmeXnpjNbEKnm9Say58XjV.jpg","release_date":"2022-12-14","title":"Avatar: The Deep Dive - A Special Edition of 20/20","video":false,"vote_average":6.5,"vote_count":5},{"adult":false,"backdrop_path":null,"genre_ids":[99],"id":278698,"original_language":"en","original_title":"Avatar Spirits","overview":"Bryan Konietzko and Michael Dante DiMartino, co-creators of the hit television series, Avatar: The Last Airbender, reflect on the creation of the masterful
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-6
series.","popularity":51.593,"poster_path":"/oBWVyOdntLJd5bBpE0wkpN6B6vy.jpg","release_date":"2010-06-22","title":"Avatar Spirits","video":false,"vote_average":9,"vote_count":16},{"adult":false,"backdrop_path":"/cACUWJKvRfhXge7NC0xxoQnkQNu.jpg","genre_ids":[10402],"id":993545,"original_language":"fr","original_title":"Avatar - Au Hellfest 2022","overview":"","popularity":21.992,"poster_path":"/fw6cPIsQYKjd1YVQanG2vLc5HGo.jpg","release_date":"2022-06-26","title":"Avatar - Au Hellfest 2022","video":false,"vote_average":8,"vote_count":4},{"adult":false,"backdrop_path":null,"genre_ids":[],"id":931019,"original_language":"en","original_title":"Avatar: Enter The World","overview":"A behind the scenes look at the new James Cameron blockbuster “Avatar”, which stars Aussie Sam Worthington. Hastily produced by Australia’s Nine Network following the film’s release.","popularity":30.903,"poster_path":"/9MHY9pYAgs91Ef7YFGWEbP4WJqC.jpg","release_date":"2009-12-05","title":"Avatar: Enter The World","video":false,"vote_average":2,"vote_count":1},{"adult":false,"backdrop_path":null,"genre_ids":[],"id":287004,"original_language":"en","original_title":"Avatar:
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-7
Production Materials","overview":"Production material overview of what was used in Avatar","popularity":12.389,"poster_path":null,"release_date":"2009-12-18","title":"Avatar: Production Materials","video":true,"vote_average":6,"vote_count":4},{"adult":false,"backdrop_path":"/x43RWEZg9tYRPgnm43GyIB4tlER.jpg","genre_ids":[],"id":740017,"original_language":"es","original_title":"Avatar: Agni Kai","overview":"","popularity":9.462,"poster_path":"/y9PrKMUTA6NfIe5FE92tdwOQ2sH.jpg","release_date":"2020-01-18","title":"Avatar: Agni Kai","video":false,"vote_average":7,"vote_count":1},{"adult":false,"backdrop_path":"/e8mmDO7fKK93T4lnxl4Z2zjxXZV.jpg","genre_ids":[],"id":668297,"original_language":"en","original_title":"The Last Avatar","overview":"The Last Avatar is a mystical adventure film, a story of a young man who leaves Hollywood to find himself. What he finds is beyond his wildest imagination. Based on ancient prophecy, contemporary truth seeking and the future of humanity, The Last Avatar is a film that takes transformational themes and makes them relevant for audiences of all ages. Filled with love, magic, mystery, conspiracy, psychics, underground cities, secret societies, light bodies and much more, The Last Avatar tells the story of the emergence of Kalki Avatar- the final Avatar of our current Age of Chaos. Kalki is also a metaphor for the innate power and potential that lies within humanity to awaken and create a world of truth, harmony and
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-8
possibility.","popularity":8.786,"poster_path":"/XWz5SS5g5mrNEZjv3FiGhqCMOQ.jpg","release_date":"2014-12-06","title":"The Last Avatar","video":false,"vote_average":4.5,"vote_count":2},{"adult":false,"backdrop_path":null,"genre_ids":[],"id":424768,"original_language":"en","original_title":"Avatar:[2015] Wacken Open Air","overview":"Started in the summer of 2001 by drummer John Alfredsson and vocalist Christian Rimmi under the name Lost Soul. The band offers a free mp3 download to a song called \"Bloody Knuckles\" if one subscribes to their newsletter. In 2005 they appeared on the compilation “Listen to Your Inner Voice” together with 17 other bands released by Inner Voice Records.","popularity":6.634,"poster_path":null,"release_date":"2015-08-01","title":"Avatar:[2015] Wacken Open Air","video":false,"vote_average":8,"vote_count":1},{"adult":false,"backdrop_path":null,"genre_ids":[],"id":812836,"original_language":"en","original_title":"Avatar - Live At Graspop 2018","overview":"Live At Graspop Festival Belgium 2018","popularity":9.855,"poster_path":null,"release_date":"","title":"Avatar - Live At Graspop 2018","video":false,"vote_average":9,"vote_count":1},{"adult":false,"backdrop_path":null,"genre_ids":[10402],"id":874770,"original_language":"en","original_title":"Avatar
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-9
Ages: Memories","overview":"On the night of memories Avatar performed songs from Thoughts of No Tomorrow, Schlacht and Avatar as voted on by the fans.","popularity":2.66,"poster_path":"/xDNNQ2cnxAv3o7u0nT6JJacQrhp.jpg","release_date":"2021-01-30","title":"Avatar Ages: Memories","video":false,"vote_average":10,"vote_count":1},{"adult":false,"backdrop_path":null,"genre_ids":[10402],"id":874768,"original_language":"en","original_title":"Avatar Ages: Madness","overview":"On the night of madness Avatar performed songs from Black Waltz and Hail The Apocalypse as voted on by the fans.","popularity":2.024,"poster_path":"/wVyTuruUctV3UbdzE5cncnpyNoY.jpg","release_date":"2021-01-23","title":"Avatar Ages: Madness","video":false,"vote_average":8,"vote_count":1},{"adult":false,"backdrop_path":"/dj8g4jrYMfK6tQ26ra3IaqOx5Ho.jpg","genre_ids":[10402],"id":874700,"original_language":"en","original_title":"Avatar Ages: Dreams","overview":"On the night of dreams Avatar performed Hunter Gatherer in its entirety, plus a selection of their most popular songs. Originally aired January 9th 2021","popularity":1.957,"poster_path":"/4twG59wnuHpGIRR9gYsqZnVysSP.jpg","release_date":"2021-01-09","title":"Avatar Ages: Dreams","video":false,"vote_average":0,"vote_count":0}],"total_pages":3,"total_results":57} > Finished chain. ' This response contains 57 movies related to the search query
https://python.langchain.com/docs/modules/chains/popular/api
c3a547b5ea13-10
"Avatar". The first movie in the list is the 2009 movie "Avatar" starring Sam Worthington. Other movies in the list include sequels to Avatar, documentaries, and live performances.'

Listen API Example

import os
from langchain.llms import OpenAI
from langchain.chains.api import podcast_docs
from langchain.chains import APIChain

# Get api key here: https://www.listennotes.com/api/pricing/
listen_api_key = 'xxx'

llm = OpenAI(temperature=0)
headers = {"X-ListenAPI-Key": listen_api_key}
chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)
chain.run("Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results")
https://python.langchain.com/docs/modules/chains/popular/api
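All three examples pass a prebuilt docs constant, but from_llm_and_api_docs accepts any documentation string. A minimal sketch with a deliberately tiny, hypothetical API description (api.example.com and the /v1/ping endpoint are invented for illustration; in practice you would paste in an excerpt of the real API's documentation):

from langchain.llms import OpenAI
from langchain.chains import APIChain

# Hypothetical API documentation, kept short so the LLM can reason over it.
api_docs = """
BASE URL: https://api.example.com/

The /v1/ping endpoint takes no parameters and returns {"status": "ok"}.
"""

llm = OpenAI(temperature=0)
chain = APIChain.from_llm_and_api_docs(llm, api_docs, verbose=True)
chain.run("Is the example API up?")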
4da1e06043c4-0
Conversational Retrieval QA | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-1
Conversational Retrieval QA

The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question answering chain to return a response.

To create one, you will need a retriever. In the below example, we will create one from a vector store, which can be created from embeddings.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain

Load in documents. You can replace this with a loader for whatever type of data you want.

from langchain.document_loaders import TextLoader

loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()

If you had multiple loaders that you wanted to combine, you could do something like:

# loaders = [....]
# docs = []
# for loader in loaders:
#     docs.extend(loader.load())

We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-2
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)

    Using embedded DuckDB without persistence: data will be transient

We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

We now initialize the ConversationalRetrievalChain:

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), memory=memory)

query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]

    " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

query = "Did he mention who she succeeded"
result = qa({"question": query})
result['answer']

    ' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'

Pass in chat history

In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-3
Here's an example of asking a question with no chat history:

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result["answer"]

    " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

Here's an example of asking a question with some chat history:

chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})
result['answer']

    ' Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.'

Using a different model for condensing the question

This chain has two steps. First, it condenses the current question and the chat history into a standalone question. This is necessary to create a standalone vector to use for retrieval. After that, it does retrieval and then answers the question using retrieval augmented generation with a separate model. Part of the power of the declarative nature of LangChain is that you can easily use a separate language model for each call. This can be useful to use a cheaper and faster model for the simpler task of condensing the question, and then a more expensive model for answering the question. Here is an example of doing so.

from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0, model="gpt-4"),
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-4
    vectorstore.as_retriever(),
    condense_question_llm=ChatOpenAI(temperature=0, model='gpt-3.5-turbo'),
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})

Return Source Documents

You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned.

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['source_documents'][0]

    Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-5
Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../state_of_the_union.txt'})

ConversationalRetrievalChain with search_distance

If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.

vectordbkwargs = {"search_distance": 0.9}

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})

ConversationalRetrievalChain with map_reduce

We can also use different types of combine document chains with the ConversationalRetrievalChain chain.

from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")

chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']

    " The president said that Ketanji Brown Jackson is
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-6
one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

ConversationalRetrievalChain with Question Answering with sources

You can also use this chain with the question answering with sources chain.

from langchain.chains.qa_with_sources import load_qa_with_sources_chain

llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")

chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']

    " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt"

ConversationalRetrievalChain with streaming to stdout

Output from the chain will be streamed to stdout token by token in this example.

from langchain.chains.llm import LLMChain
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-7
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain

# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0)
streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)

question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)

qa = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})

    The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.

chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})

    Ketanji Brown Jackson succeeded Justice Stephen Breyer on the United States Supreme Court.

get_chat_history Function

You can also specify a get_chat_history function, which can be
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
4da1e06043c4-8
used to format the chat_history string.

def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), get_chat_history=get_chat_history)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
result['answer']

    " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
https://python.langchain.com/docs/modules/chains/popular/chat_vector_db
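Since get_chat_history receives the raw (human, ai) tuples, it is also a convenient place to bound how much history is fed back into the question-condensing prompt. A small variation of the function above (keeping only the last three exchanges is an illustrative choice, not a library default):

def get_chat_history(inputs) -> str:
    res = []
    # Keep only the last three exchanges to bound the prompt size.
    for human, ai in inputs[-3:]:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    vectorstore.as_retriever(),
    get_chat_history=get_chat_history,
)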
23d1bce49020-0
SQL | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-1
SQL

This example demonstrates the use of the SQLDatabaseChain for answering questions over a SQL database.

Under the hood, LangChain uses SQLAlchemy to connect to SQL databases. The SQLDatabaseChain can therefore be used with any SQL dialect supported by SQLAlchemy, such as MS SQL, MySQL, MariaDB, PostgreSQL, Oracle SQL, Databricks, and SQLite. Please refer to the SQLAlchemy documentation for more information about requirements for connecting to your database. For example, a connection to MySQL requires an appropriate connector such as PyMySQL. A URI for a MySQL connection might look like: mysql+pymysql://user:pass@some_mysql_db_address/db_name.
https://python.langchain.com/docs/modules/chains/popular/sqlite
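A minimal sketch of such a non-SQLite connection (the credentials and host are placeholders from the URI example above, and include_tables is an optional restriction on what the chain can see):

from langchain import SQLDatabase

# Placeholder URI; any SQLAlchemy-supported dialect works the same way.
db = SQLDatabase.from_uri(
    "mysql+pymysql://user:pass@some_mysql_db_address/db_name",
    include_tables=["Employee"],  # optionally limit the tables exposed to the LLM
)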
23d1bce49020-2
This demonstration uses SQLite and the example Chinook database. To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db")
llm = OpenAI(temperature=0, verbose=True)

NOTE: For data-sensitive projects, you can specify return_direct=True in the SQLDatabaseChain initialization to directly return the output of the SQL query without any additional formatting. This prevents the LLM from seeing any contents within the database. Note, however, that the LLM still has access to the database schema (i.e. dialect, table, and key names) by default.

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many employees are there?")

    > Entering new SQLDatabaseChain chain...
    How many employees are there?
    SQLQuery:

    /workspace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
      sample_rows = connection.execute(command)

    SELECT COUNT(*) FROM "Employee";
    SQLResult: [(8,)]
    Answer:There are 8 employees.
    > Finished chain.

    'There are 8 employees.'
https://python.langchain.com/docs/modules/chains/popular/sqlite
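A quick sketch of the return_direct option mentioned in the note above, reusing the same llm and db objects:

# Return the raw SQL result instead of a natural-language answer, so row
# contents never pass back through the LLM.
db_chain = SQLDatabaseChain.from_llm(llm, db, return_direct=True)
db_chain.run("How many employees are there?")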
23d1bce49020-3
Use Query Checker

Sometimes the Language Model generates invalid SQL with small mistakes that can be self-corrected using the same technique used by the SQL Database Agent to try and fix the SQL using the LLM. You can simply specify this option when creating the chain:

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True)
db_chain.run("How many albums by Aerosmith?")

    > Entering new SQLDatabaseChain chain...
    How many albums by Aerosmith?
    SQLQuery:SELECT COUNT(*) FROM Album WHERE ArtistId = 3;
    SQLResult: [(1,)]
    Answer:There is 1 album by Aerosmith.
    > Finished chain.

    'There is 1 album by Aerosmith.'

Customize Prompt

You can also customize the prompt that is used. Here is an example prompting it to understand that foobar is the same as the Employee table:

from langchain.prompts.prompt import PromptTemplate

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.

Use the following format:

Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"

Only use the following tables:

{table_info}

If someone asks for the table foobar, they really mean the employee table.

Question: {input}"""
PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True)
db_chain.run("How many employees are there in the foobar table?")
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-4
    > Entering new SQLDatabaseChain chain...
    How many employees are there in the foobar table?
    SQLQuery:SELECT COUNT(*) FROM Employee;
    SQLResult: [(8,)]
    Answer:There are 8 employees in the foobar table.
    > Finished chain.

    'There are 8 employees in the foobar table.'

Return Intermediate Steps

You can also return the intermediate steps of the SQLDatabaseChain. This allows you to access the SQL statement that was generated, as well as the result of running that against the SQL Database.

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, use_query_checker=True, return_intermediate_steps=True)
result = db_chain("How many employees are there in the foobar table?")
result["intermediate_steps"]

    > Entering new SQLDatabaseChain chain...
    How many employees are there in the foobar table?
    SQLQuery:SELECT COUNT(*) FROM Employee;
    SQLResult: [(8,)]
    Answer:There are 8 employees in the foobar table.
    > Finished chain.

    [{'input': 'How many employees are there in the foobar table?\nSQLQuery:SELECT COUNT(*) FROM Employee;\nSQLResult: [(8,)]\nAnswer:',
      'top_k': '5',
      'dialect': 'sqlite',
      'table_info': '\nCREATE TABLE "Artist" (\n\t"ArtistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-5
KEY ("ArtistId")\n)\n\n/*\n3 rows from Artist table:\nArtistId\tName\n1\tAC/DC\n2\tAccept\n3\tAerosmith\n*/\n\n\nCREATE TABLE "Employee" (\n\t"EmployeeId" INTEGER NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"FirstName" NVARCHAR(20) NOT NULL, \n\t"Title" NVARCHAR(30), \n\t"ReportsTo" INTEGER, \n\t"BirthDate" DATETIME, \n\t"HireDate" DATETIME, \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60), \n\tPRIMARY KEY ("EmployeeId"), \n\tFOREIGN KEY("ReportsTo") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Employee table:\nEmployeeId\tLastName\tFirstName\tTitle\tReportsTo\tBirthDate\tHireDate\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\n1\tAdams\tAndrew\tGeneral Manager\tNone\t1962-02-18 00:00:00\t2002-08-14 00:00:00\t11120 Jasper Ave NW\tEdmonton\tAB\tCanada\tT5K 2N1\t+1 (780) 428-9482\t+1 (780) 428-3457\tandrew@chinookcorp.com\n2\tEdwards\tNancy\tSales
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-6
Manager\t1\t1958-12-08 00:00:00\t2002-05-01 00:00:00\t825 8 Ave SW\tCalgary\tAB\tCanada\tT2P 2T3\t+1 (403) 262-3443\t+1 (403) 262-3322\tnancy@chinookcorp.com\n3\tPeacock\tJane\tSales Support Agent\t2\t1973-08-29 00:00:00\t2002-04-01 00:00:00\t1111 6 Ave SW\tCalgary\tAB\tCanada\tT2P 5M5\t+1 (403) 262-3443\t+1 (403) 262-6712\tjane@chinookcorp.com\n*/\n\n\nCREATE TABLE "Genre" (\n\t"GenreId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("GenreId")\n)\n\n/*\n3 rows from Genre table:\nGenreId\tName\n1\tRock\n2\tJazz\n3\tMetal\n*/\n\n\nCREATE TABLE "MediaType" (\n\t"MediaTypeId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("MediaTypeId")\n)\n\n/*\n3 rows from MediaType table:\nMediaTypeId\tName\n1\tMPEG audio file\n2\tProtected AAC audio file\n3\tProtected MPEG-4 video file\n*/\n\n\nCREATE TABLE "Playlist" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-7
KEY ("PlaylistId")\n)\n\n/*\n3 rows from Playlist table:\nPlaylistId\tName\n1\tMusic\n2\tMovies\n3\tTV Shows\n*/\n\n\nCREATE TABLE "Album" (\n\t"AlbumId" INTEGER NOT NULL, \n\t"Title" NVARCHAR(160) NOT NULL, \n\t"ArtistId" INTEGER NOT NULL, \n\tPRIMARY KEY ("AlbumId"), \n\tFOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId")\n)\n\n/*\n3 rows from Album table:\nAlbumId\tTitle\tArtistId\n1\tFor Those About To Rock We Salute You\t1\n2\tBalls to the Wall\t2\n3\tRestless and Wild\t2\n*/\n\n\nCREATE TABLE "Customer" (\n\t"CustomerId" INTEGER NOT NULL, \n\t"FirstName" NVARCHAR(40) NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"Company" NVARCHAR(80), \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60) NOT NULL, \n\t"SupportRepId" INTEGER, \n\tPRIMARY KEY ("CustomerId"), \n\tFOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-8
REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer table:\nCustomerId\tFirstName\tLastName\tCompany\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\tSupportRepId\n1\tLuís\tGonçalves\tEmbraer - Empresa Brasileira de Aeronáutica S.A.\tAv. Brigadeiro Faria Lima, 2170\tSão José dos Campos\tSP\tBrazil\t12227-000\t+55 (12) 3923-5555\t+55 (12) 3923-5566\tluisg@embraer.com.br\t3\n2\tLeonie\tKöhler\tNone\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t+49 0711 2842222\tNone\tleonekohler@surfeu.de\t5\n3\tFrançois\tTremblay\tNone\t1498 rue Bélanger\tMontréal\tQC\tCanada\tH2G 1A7\t+1 (514) 721-4711\tNone\tftremblay@gmail.com\t3\n*/\n\n\nCREATE TABLE "Invoice" (\n\t"InvoiceId" INTEGER NOT NULL, \n\t"CustomerId" INTEGER NOT NULL, \n\t"InvoiceDate" DATETIME NOT NULL, \n\t"BillingAddress" NVARCHAR(70), \n\t"BillingCity" NVARCHAR(40), \n\t"BillingState" NVARCHAR(40), \n\t"BillingCountry" NVARCHAR(40), \n\t"BillingPostalCode" NVARCHAR(10), \n\t"Total" NUMERIC(10, 2) NOT NULL, \n\tPRIMARY
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-9
NUMERIC(10, 2) NOT NULL, \n\tPRIMARY KEY ("InvoiceId"), \n\tFOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId")\n)\n\n/*\n3 rows from Invoice table:\nInvoiceId\tCustomerId\tInvoiceDate\tBillingAddress\tBillingCity\tBillingState\tBillingCountry\tBillingPostalCode\tTotal\n1\t2\t2009-01-01 00:00:00\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t1.98\n2\t4\t2009-01-02 00:00:00\tUllevålsveien 14\tOslo\tNone\tNorway\t0171\t3.96\n3\t8\t2009-01-03 00:00:00\tGrétrystraat 63\tBrussels\tNone\tBelgium\t1000\t5.94\n*/\n\n\nCREATE TABLE "Track" (\n\t"TrackId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(200) NOT NULL, \n\t"AlbumId" INTEGER, \n\t"MediaTypeId" INTEGER NOT NULL, \n\t"GenreId" INTEGER, \n\t"Composer" NVARCHAR(220), \n\t"Milliseconds" INTEGER NOT NULL, \n\t"Bytes" INTEGER, \n\t"UnitPrice" NUMERIC(10, 2) NOT NULL, \n\tPRIMARY KEY ("TrackId"), \n\tFOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), \n\tFOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), \n\tFOREIGN KEY("AlbumId")
https://python.langchain.com/docs/modules/chains/popular/sqlite
23d1bce49020-10
REFERENCES "Album" ("AlbumId")\n)\n\n/*\n3 rows from Track table:\nTrackId\tName\tAlbumId\tMediaTypeId\tGenreId\tComposer\tMilliseconds\tBytes\tUnitPrice\n1\tFor Those About To Rock (We Salute You)\t1\t1\t1\tAngus Young, Malcolm Young, Brian Johnson\t343719\t11170334\t0.99\n2\tBalls to the Wall\t2\t2\t1\tNone\t342562\t5510424\t0.99\n3\tFast As a Shark\t3\t2\t1\tF. Baltes, S. Kaufman, U. Dirkscneider & W. Hoffman\t230619\t3990994\t0.99\n*/\n\n\nCREATE TABLE "InvoiceLine" (\n\t"InvoiceLineId" INTEGER NOT NULL, \n\t"InvoiceId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\t"UnitPrice" NUMERIC(10, 2) NOT NULL, \n\t"Quantity" INTEGER NOT NULL, \n\tPRIMARY KEY ("InvoiceLineId"), \n\tFOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), \n\tFOREIGN KEY("InvoiceId") REFERENCES "Invoice" ("InvoiceId")\n)\n\n/*\n3 rows from InvoiceLine table:\nInvoiceLineId\tInvoiceId\tTrackId\tUnitPrice\tQuantity\n1\t1\t2\t0.99\t1\n2\t1\t4\t0.99\t1\n3\t2\t6\t0.99\t1\n*/\n\n\nCREATE TABLE "PlaylistTrack" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\tPRIMARY KEY ("PlaylistId", "TrackId"), \n\tFOREIGN KEY("TrackId")
https://python.langchain.com/docs/modules/chains/popular/sqlite