been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail.
These discoveries can spark a child's imagination about the infinite wonders of the universe.

> Entering new SequentialChain chain...

> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text. Format your output as a bulleted list.

Text:
"""
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail.
These discoveries can spark a child's imagination about the infinite wonders of the universe.
"""

Facts:

> Finished chain.
https://python.langchain.com/docs/modules/chains/additional/llm_summarization_checker
> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.

Here is a bullet point list of facts:
"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas."
• The light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system.
• Exoplanets were first discovered in 1992.
• The JWST has allowed us to see exoplanets in greater detail.
"""

For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why.

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.

Checked Assertions:
"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True
• The light from these galaxies has been traveling for over 13 billion years to reach us. - True
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. - False. The first exoplanet was discovered in 1992, but the first images of exoplanets were taken by the Hubble Space Telescope in 2004.
• Exoplanets were first discovered in 1992. - True
• The JWST has allowed us to see exoplanets in greater detail. - Undetermined. The JWST has not yet been launched, so it is not yet known how much detail it will be able to provide.
"""

Original Summary:
"""
Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail.
These discoveries can spark a child's imagination about the infinite wonders of the universe.
"""

Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary.

Summary:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False".

Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False

===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True

===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False

===
Checked Assertions:"""
• The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True
• The light from these galaxies has been traveling for over 13 billion years to reach us. - True
• JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. - False. The first exoplanet was discovered in 1992, but the first images of exoplanets were taken by the Hubble Space Telescope in 2004.
• Exoplanets were first discovered in 1992. - True
• The JWST has allowed us to see exoplanets in greater detail. - Undetermined. The JWST has not yet been launched, so it is not yet known how much detail it will be able to provide.
"""
Result:

> Finished chain.

> Finished chain.

Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):
• In 2023, The JWST will spot a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.
• The telescope will capture images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.
• Exoplanets, which are planets outside of our own solar system, were first discovered in 1992. The JWST will allow us to see them in greater detail when it is launched in 2023.
These discoveries can spark a child's imagination about the infinite wonders of the universe.

> Finished chain.

'Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):\n• In 2023, The JWST will spot a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.\n• The telescope will capture images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.\n• Exoplanets, which are planets outside of our own solar system, were first discovered in 1992. The JWST will allow us to see them in greater detail when it is launched in 2023.\nThese discoveries can spark a child\'s imagination about the infinite wonders of the universe.'

from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=3)
text = "The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea."
checker_chain.run(text)

> Entering new LLMSummarizationCheckerChain chain...

> Entering new SequentialChain chain...

> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text. Format your output as a bulleted list.

Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""

Facts:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.

Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean.
- It is the smallest of the five oceans.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- The sea is named after the island of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Norwegian Sea.
"""

For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why.

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.

Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the island of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. True
"""

Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""
Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary.

Summary:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False".

Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False

===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True

===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False

===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the island of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. True
"""
Result:

> Finished chain.

> Finished chain.

The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
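Because some assertions failed, the chain now runs the whole sequence again on the revised summary. That control flow — extract facts, check each one, rewrite, then re-verify, stopping early once everything checks out — can be sketched as a plain loop. The helper callables below are hypothetical stand-ins for the LLM-backed prompts, not LangChain APIs:

```python
from typing import Callable, List, Tuple

def run_checker(text: str,
                extract_facts: Callable[[str], List[str]],
                check_fact: Callable[[str], Tuple[bool, str]],
                max_checks: int = 3) -> str:
    """Sketch of the extract -> check -> rewrite -> re-verify loop.

    `extract_facts` and `check_fact` stand in for the LLM-backed
    prompts; here they can be any callables, e.g. a lookup table.
    """
    summary = text
    for _ in range(max_checks):
        facts = extract_facts(summary)
        checked = [(fact, *check_fact(fact)) for fact in facts]
        if all(ok for _, ok, _ in checked):
            return summary  # every assertion verified: stop early
        # Rewrite step: swap each false fact for its suggested correction.
        for fact, ok, correction in checked:
            if not ok:
                summary = summary.replace(fact, correction)
    return summary  # max_checks reached; return the best effort

# Toy stand-ins for the LLM calls:
facts_db = {
    "Birds are mammals.": (False, "Birds are a class of their own."),
    "Birds can lay eggs.": (True, ""),
}
result = run_checker(
    "Birds are mammals. Birds can lay eggs.",
    extract_facts=lambda s: [f for f in facts_db if f in s],
    check_fact=lambda f: facts_db[f],
)
```

The second pass finds no remaining false facts and returns early, which is exactly the behavior visible in the traces here.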
> Entering new SequentialChain chain...

> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text. Format your output as a bulleted list.

Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""

Facts:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.

Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is an arm of the Arctic Ocean.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- It is named after the island of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Norwegian Sea.
"""

For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why.

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.

Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is an arm of the Arctic Ocean. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- It is named after the island of Greenland. False - It is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean.
"""

Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea.
"""

Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary.

Summary:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False".

Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False

===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True

===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False

===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is an arm of the Arctic Ocean. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- It is named after the island of Greenland. False - It is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic. True
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean.
"""
Result:

> Finished chain.

> Finished chain.

The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.

> Entering new SequentialChain chain...

> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text. Format your output as a bulleted list.

Text:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
"""

Facts:
> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.

Here is a bullet point list of facts:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland.
- It has an area of 465,000 square miles.
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs.
- The sea is named after the country of Greenland.
- It is the Arctic Ocean's main outlet to the Atlantic.
- It is often frozen over so navigation is limited.
- It is considered the northern branch of the Atlantic Ocean.
"""

For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why.

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.

Checked Assertions:
"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the country of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea.
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean.
"""

Original Summary:
"""
The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean.
"""

Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary.

Summary:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False".

Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False

===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True

===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False

===
Checked Assertions:"""
- The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True
- It has an area of 465,000 square miles. True
- It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True
- The sea is named after the country of Greenland. True
- It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea.
- It is often frozen over so navigation is limited. True
- It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean.
"""
Result:

> Finished chain.

> Finished chain.

The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Barents Sea. It is often frozen over so navigation is limited, and is considered part of the Arctic Ocean.

> Finished chain.

"The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Barents Sea. It is often frozen over so navigation is limited, and is considered part of the Arctic Ocean."

from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=3, verbose=True)
text = "Mammals can lay eggs, birds can lay eggs, therefore birds are mammals."
checker_chain.run(text)

> Entering new LLMSummarizationCheckerChain chain...

> Entering new SequentialChain chain...

> Entering new LLMChain chain...
Prompt after formatting:
Given some text, extract a list of facts from the text. Format your output as a bulleted list.

Text:
"""
Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.
"""

Facts:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.

Here is a bullet point list of facts:
"""
- Mammals can lay eggs
- Birds can lay eggs
- Birds are mammals
"""

For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why.

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.

Checked Assertions:
"""
- Mammals can lay eggs: False. Mammals are not capable of laying eggs, as they give birth to live young.
- Birds can lay eggs: True. Birds are capable of laying eggs.
- Birds are mammals: False. Birds are not mammals, they are a class of their own.
"""

Original Summary:
"""
Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.
"""

Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary.

Summary:

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
Below are some assertions that have been fact checked and are labeled as true or false.
If all of the assertions are true, return "True". If any of the assertions are false, return "False".

Here are some examples:
===
Checked Assertions: """
- The sky is red: False
- Water is made of lava: False
- The sun is a star: True
"""
Result: False

===
Checked Assertions: """
- The sky is blue: True
- Water is wet: True
- The sun is a star: True
"""
Result: True

===
Checked Assertions: """
- The sky is blue - True
- Water is made of lava- False
- The sun is a star - True
"""
Result: False

===
Checked Assertions:"""
- Mammals can lay eggs: False. Mammals are not capable of laying eggs, as they give birth to live young.
- Birds can lay eggs: True. Birds are capable of laying eggs.
- Birds are mammals: False. Birds are not mammals, they are a class of their own.
"""
Result:

> Finished chain.

> Finished chain.

Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own.
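Each pass ends with this verdict prompt, which reduces the labeled assertions to a single "True"/"False" that decides whether another revision pass runs. That reduction is simple enough to sketch without an LLM (a simplification: lines labeled "Undetermined", like the JWST one earlier, are treated as not-false here, matching how the chain proceeds):

```python
def all_assertions_true(checked_block: str) -> bool:
    """Mimic the verdict prompt: False if any assertion line carries a
    'False' label; 'True' and 'Undetermined' lines both pass."""
    lines = [line.strip() for line in checked_block.strip().splitlines()]
    # Only bullet lines are assertions; skip delimiters like the quotes.
    assertions = [line for line in lines if line.startswith(("-", "•"))]
    return all("False" not in line for line in assertions)
```

This naive token match is brittle on assertions whose text itself contains the word "False", which is presumably one reason the chain delegates the verdict to an LLM instead.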
https://python.langchain.com/docs/modules/chains/additional/llm_summarization_checker
b9e9af00ccda-30
eggs, however birds are not mammals, they are a class of their own. > Entering new SequentialChain chain... > Entering new LLMChain chain... Prompt after formatting: Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own. """ Facts: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. Here is a bullet point list of facts: """ - Birds and mammals are both capable of laying eggs. - Birds are not mammals. - Birds are a class of their own. """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why. > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true of false. If the answer is
https://python.langchain.com/docs/modules/chains/additional/llm_summarization_checker
b9e9af00ccda-31
some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ - Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs. - Birds are not mammals: True. Birds are a class of their own, separate from mammals. - Birds are a class of their own: True. Birds are a class of their own, separate from mammals. """ Original Summary: """ Birds and mammals are both capable of laying eggs, however birds are not mammals, they are a class of their own. """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False
https://python.langchain.com/docs/modules/chains/additional/llm_summarization_checker
b9e9af00ccda-32
a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" - Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs. - Birds are not mammals: True. Birds are a class of their own, separate from mammals. - Birds are a class of their own: True. Birds are a class of their own, separate from mammals. """ Result: > Finished chain. > Finished chain. > Finished chain. 'Birds are not mammals, but they are a class of their own. They lay eggs, unlike mammals which give birth to live young.'PreviousHTTP request chainNextLLM Symbolic MathCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/modules/chains/additional/llm_summarization_checker
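The traces above come from LangChain's summarization checker, which loops four LLM steps — extract assertions, fact-check them, rewrite the summary, and decide whether every assertion now holds — up to a maximum number of retries. A minimal sketch of that control flow, with plain functions standing in for the four LLMChains (the helper names and toy checks here are illustrative, not LangChain APIs):

```python
def check_summary(summary, extract_facts, check_facts, revise, max_checks=2):
    # Loop: extract assertions, fact-check them, rewrite the summary,
    # and stop once every assertion checks out (or retries run out).
    for _ in range(max_checks):
        facts = extract_facts(summary)
        checked = check_facts(facts)  # list of (fact, is_true) pairs
        summary = revise(summary, checked)
        if all(ok for _, ok in checked):
            break
    return summary

# Toy stand-ins: flag the false claim about mammals and correct it.
extract = lambda s: [s]
check = lambda facts: [(f, "Mammals lay eggs" not in f) for f in facts]
revise = lambda s, checked: s if all(ok for _, ok in checked) else "Birds lay eggs."

print(check_summary("Mammals lay eggs.", extract, check, revise))  # → Birds lay eggs.
```

The first pass flags the assertion as false and rewrites the summary; the second pass finds nothing false and the loop exits, mirroring the two SequentialChain runs shown in the trace.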
e7ea722c5c40-0
Graph DB QA chain | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/graph_cypher_qa
e7ea722c5c40-1
Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsHow toFoundationalDocumentsPopularAdditionalAnalyze DocumentSelf-critique chain with constitutional AICausal program-aided language (CPAL) chainElasticsearch databaseExtractionFLAREArangoDB QA chainGraph DB QA chainHugeGraph QA ChainKuzuQAChainNebulaGraphQAChainGraph QAGraphSparqlQAChainHypothetical Document EmbeddingsBash chainSelf-checking chainMath chainHTTP request chainSummarization checker chainLLM Symbolic MathModerationDynamically selecting from multiple promptsDynamically selecting from multiple retrieversNeptune Open Cypher QA ChainRetrieval QA using OpenAI functionsOpenAPI chainOpenAPI calls with OpenAI functionsProgram-aided language model (PAL) chainQuestion-Answering CitationsDocument QATaggingVector store-augmented text generationMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesChainsAdditionalGraph DB QA chainOn this pageGraph DB QA chainThis notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.You will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or running a docker container.
https://python.langchain.com/docs/modules/chains/additional/graph_cypher_qa
e7ea722c5c40-2
You can run a local docker container by executing the following script:docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latestIf you are using the docker container, you need to wait a couple of seconds for the database to start.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphCypherQAChainfrom langchain.graphs import Neo4jGraphgraph = Neo4jGraph( url="bolt://localhost:7687", username="neo4j", password="pleaseletmein")Seeding the database​Assuming your database is empty, you can populate it using Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same if you run it one or multiple times.graph.query( """MERGE (m:Movie {name:"Top Gun"})WITH mUNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actorMERGE (a:Actor {name:actor})MERGE (a)-[:ACTED_IN]->(m)""") []Refresh graph schema information​If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.graph.refresh_schema()print(graph.get_schema) Node properties are the following: [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'},
https://python.langchain.com/docs/modules/chains/additional/graph_cypher_qa
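MERGE's idempotency is the key property of the seeding statement: re-running it matches existing nodes instead of creating duplicates. A rough analogy in plain Python, using a dict keyed by (label, name) — this illustrates the semantics only, not how Neo4j actually stores data:

```python
graph = {}

def merge_node(label, name):
    # Like Cypher MERGE: create the node only if it doesn't already exist.
    return graph.setdefault((label, name), {"name": name})

# Running the seeding step twice leaves the graph unchanged.
for _ in range(2):
    merge_node("Movie", "Top Gun")
    for actor in ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"]:
        merge_node("Actor", actor)

print(len(graph))  # → 5: one movie plus four actors, no duplicates
```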
e7ea722c5c40-3
[{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}] Relationship properties are the following: [] The relationships are the following: ['(:Actor)-[:ACTED_IN]->(:Movie)'] Querying the graph​We can now use the graph cypher QA chain to ask questions of the graphchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}] > Finished chain. 'Val Kilmer, Anthony Edwards, Meg Ryan, and Tom Cruise played in Top Gun.'Limit the number of results​You can limit the number of results from the Cypher QA Chain using the top_k parameter.
https://python.langchain.com/docs/modules/chains/additional/graph_cypher_qa
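Under the hood the chain makes two LLM calls around one database query: generate Cypher from the question, execute it, then phrase an answer from the results. A sketch of that pipeline with stand-in callables (the function names and stub outputs are illustrative, not the chain's internals):

```python
def graph_qa(question, generate_cypher, run_query, phrase_answer):
    cypher = generate_cypher(question)       # LLM call 1: question -> Cypher
    context = run_query(cypher)              # hit the graph database
    return phrase_answer(question, context)  # LLM call 2: results -> answer

# Toy stand-ins mimicking the Top Gun example above.
gen = lambda q: "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name"
run = lambda c: [{"a.name": n} for n in ["Val Kilmer", "Anthony Edwards"]]
ans = lambda q, ctx: " and ".join(r["a.name"] for r in ctx) + " played in Top Gun."

print(graph_qa("Who played in Top Gun?", gen, run, ans))
# → Val Kilmer and Anthony Edwards played in Top Gun.
```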
e7ea722c5c40-4
The default is 10.chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}] > Finished chain. 'Val Kilmer and Anthony Edwards played in Top Gun.'Return intermediate results​You can return intermediate steps from the Cypher QA Chain using the return_intermediate_steps parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True)result = chain("Who played in Top Gun?")print(f"Intermediate steps: {result['intermediate_steps']}")print(f"Final answer: {result['result']}") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}] > Finished chain. Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie
https://python.langchain.com/docs/modules/chains/additional/graph_cypher_qa
e7ea722c5c40-5
[{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\nRETURN a.name"}, {'context': [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]}] Final answer: Val Kilmer, Anthony Edwards, Meg Ryan, and Tom Cruise played in Top Gun.Return direct results​You can return direct results from the Cypher QA Chain using the return_direct parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name > Finished chain. [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]
https://python.langchain.com/docs/modules/chains/additional/graph_cypher_qa
ca061add0d60-0
Document QA | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-1
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.Prepare Data​First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromafrom langchain.docstore.document import Documentfrom langchain.prompts import PromptTemplatefrom langchain.indexes.vectorstore import VectorstoreIndexCreatorwith open("../../state_of_the_union.txt") as f:
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-2
import VectorstoreIndexCreatorwith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever() Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.query = "What did the president say about Justice Breyer"docs = docsearch.get_relevant_documents(query)from langchain.chains.question_answering import load_qa_chainfrom langchain.llms import OpenAIQuickstart​If you just want to get started as quickly as possible, this is the recommended way to do it:chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain.run(input_documents=docs, question=query) ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'If you want more control and understanding over what is happening, please see the information below.The stuff Chain​This section shows results of using the stuff Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}Custom PromptsYou can also use your own
https://python.langchain.com/docs/modules/chains/additional/question_answering
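The `CharacterTextSplitter` above breaks the document into roughly 1000-character chunks before embedding. A simplified stand-in for that behavior — split on a separator, then greedily merge pieces up to the size cap (the real splitter also supports overlap, which this sketch omits):

```python
def split_text(text, chunk_size=1000, separator="\n\n"):
    # Greedily merge separator-delimited pieces into chunks of at most
    # chunk_size characters (pieces longer than the cap pass through whole).
    chunks, current = [], ""
    for piece in text.split(separator):
        candidate = piece if not current else current + separator + piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = piece
    if current:
        chunks.append(current)
    return chunks

print(split_text("a\n\nb\n\nc", chunk_size=4))  # → ['a\n\nb', 'c']
```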
ca061add0d60-3
serve the country and thanked him for his service.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.{context}Question: {question}Answer in Italian:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"])chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'}The map_reduce Chain​This section shows results of using the map_reduce Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}Intermediate StepsWe can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [' "Tonight, I’d
https://python.langchain.com/docs/modules/chains/additional/question_answering
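The map_reduce strategy runs the question prompt over every document independently, then combines the per-document extractions in one final call. A sketch of that flow with a stub in place of the LLM (the function names and prompt wording here are illustrative, not the LangChain internals):

```python
def map_reduce_qa(docs, question, llm):
    # Map: query each document on its own (these calls can be batched).
    extractions = [llm(f"Context: {d}\nQuestion: {question}") for d in docs]
    # Reduce: one final call over the concatenated extractions.
    return map_reduce_combine(extractions, question, llm)

def map_reduce_combine(extractions, question, llm):
    return llm("Combine:\n" + "\n".join(extractions) + f"\nQuestion: {question}")

# Counting stub so the call pattern is visible: N map calls + 1 reduce call.
calls = []
def stub_llm(prompt):
    calls.append(prompt)
    return f"response {len(calls)}"

map_reduce_qa(["doc1", "doc2", "doc3"], "q", stub_llm)
print(len(calls))  # → 4: three map calls plus one reduce call
```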
ca061add0d60-4
{'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."', ' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.', ' None', ' None'], 'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question. Return any relevant text translated into Italian.{context}Question: {question}Relevant text, if any, in Italian:"""QUESTION_PROMPT = PromptTemplate( template=question_prompt_template, input_variables=["context", "question"])combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer in Italian. If you don't know the answer, just say that you don't know. Don't try to make up an answer.QUESTION: {question}========={summaries}=========Answer in Italian:"""COMBINE_PROMPT = PromptTemplate( template=combine_prompt_template, input_variables=["summaries", "question"])chain =
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-5
template=combine_prompt_template, input_variables=["summaries", "question"])chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", '\nNessun testo pertinente.', ' Non ha detto nulla riguardo a Justice Breyer.', " Non c'è testo pertinente."], 'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'}Batch SizeWhen using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:llm = OpenAI(batch_size=5, temperature=0)The refine Chain​This section shows results of using the refine Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")query = "What did the president say about Justice Breyer"chain({"input_documents": docs,
https://python.langchain.com/docs/modules/chains/additional/question_answering
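Lowering `batch_size` simply caps how many map-step prompts are sent to the provider at once; the grouping itself is just slicing. A stand-in helper shows the idea (the helper name is illustrative, not part of LangChain):

```python
def batched(prompts, batch_size):
    # Yield successive slices of at most batch_size prompts; smaller
    # batches make the map step gentler on provider rate limits.
    for i in range(0, len(prompts), batch_size):
        yield prompts[i:i + batch_size]

groups = list(batched([f"prompt {i}" for i in range(12)], batch_size=5))
print([len(g) for g in groups])  # → [5, 5, 2]
```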
ca061add0d60-6
= "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'}Intermediate StepsWe can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable.chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.', '\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and
https://python.langchain.com/docs/modules/chains/additional/question_answering
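The refine strategy is sequential: answer from the first document, then feed the running answer plus each subsequent document back to the LLM — which is why the intermediate steps above grow progressively longer. The control flow, with stand-in callables (names are illustrative):

```python
def refine_qa(docs, question, initial_answer, refine_step):
    # First document seeds the answer; each later document refines it.
    answer = initial_answer(docs[0], question)
    for doc in docs[1:]:
        answer = refine_step(answer, doc, question)
    return answer

# Stubs that just accumulate, to make the sequential flow visible.
seed = lambda doc, q: doc
extend = lambda ans, doc, q: ans + "+" + doc

print(refine_qa(["d1", "d2", "d3"], "q", seed, extend))  # → d1+d2+d3
```

Note the trade-off this implies: refine cannot parallelize across documents the way map_reduce can, since each step depends on the previous answer.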
ca061add0d60-7
for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'], 'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.refine_prompt_template = ( "The original question is as follows: {question}\n" "We have provided an existing answer: {existing_answer}\n" "We have the opportunity to refine the existing answer" "(only if needed) with some more context below.\n" "------------\n" "{context_str}\n" "------------\n" "Given the new context, refine the original answer to better " "answer the question. " "If the context isn't useful, return the original answer. Reply in Italian.")refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_prompt_template,)initial_qa_template = ( "Context information is below. \n" "---------------------\n"
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-8
"Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question: {question}\nYour answer should be in Italian.\n")initial_qa_prompt = PromptTemplate( input_variables=["context_str", "question"], template=initial_qa_template)chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True, question_prompt=initial_qa_prompt, refine_prompt=refine_prompt)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo servizio.', "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.", "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-9
di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.", "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"], 'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-10
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"}The map-rerank Chain​This sections shows results of using the map-rerank Chain to do question answering with sources.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True)query = "What did the president say about Justice Breyer"results = chain({"input_documents": docs, "question": query}, return_only_outputs=True)results["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'results["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'}, {'answer': ' This document does not answer the question', 'score': '0'},
https://python.langchain.com/docs/modules/chains/additional/question_answering
ca061add0d60-11
{'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}]Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.from langchain.output_parsers import RegexParseroutput_parser = RegexParser( regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"],)prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:Question: [question here]Helpful Answer In Italian: [answer here]Score: [score between 0 and 100]Begin!Context:---------{context}---------Question: {question}Helpful Answer In Italian:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], output_parser=output_parser,)chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT)query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.', 'score': '100'}, {'answer':
https://python.langchain.com/docs/modules/chains/additional/question_answering
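The `RegexParser` above splits each map-step response into an answer and a numeric score, which the chain then uses to rank documents. The same parsing with the standard `re` module (the sample responses are illustrative, not real model output):

```python
import re

# Same pattern the RegexParser uses: everything before "\nScore: " is the
# answer, everything after it is the score.
pattern = re.compile(r"(.*?)\nScore: (.*)")

responses = [
    "Il presidente ha ringraziato Justice Breyer.\nScore: 100",
    "Non so.\nScore: 0",
]
parsed = [pattern.match(r).groups() for r in responses]

# Rerank step: keep the answer with the highest score.
best = max(parsed, key=lambda p: int(p[1]))
print(best[0])  # → Il presidente ha ringraziato Justice Breyer.
```

One caveat worth knowing about this pattern: without `re.DOTALL`, `.` does not match newlines, so an answer spanning multiple lines would need the flag added.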
ca061add0d60-12
'score': '100'}, {'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Non so.', 'score': '0'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}Document QA with sources​We can also perform document QA and return the sources that were used to answer the question. To do this we'll just need to make sure each document has a "source" key in the metadata, and we'll use the load_qa_with_sources helper to construct our chain:docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])query = "What did the president say about Justice Breyer"docs = docsearch.similarity_search(query)from langchain.chains.qa_with_sources import load_qa_with_sources_chainchain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}
https://python.langchain.com/docs/modules/chains/additional/question_answering
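The sources mechanism hinges on the "source" key in each document's metadata: the chain cites the sources of whichever documents the answer drew on. A plain-Python sketch of that bookkeeping (the data and helper name are illustrative, not the chain's internals):

```python
# Each document carries a "source" key in its metadata, as in the docs above.
docs = [
    {"text": "Justice Breyer was thanked for his service.", "metadata": {"source": "30-pl"}},
    {"text": "An unrelated passage.", "metadata": {"source": "12-pl"}},
]

def cite_sources(docs, used):
    # Collect the source ids of the documents the answer actually used.
    sources = [d["metadata"]["source"] for d in docs if used(d)]
    return "SOURCES: " + ", ".join(sources)

print(cite_sources(docs, used=lambda d: "Breyer" in d["text"]))  # → SOURCES: 30-pl
```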
581464dcab1c-0
Bash chain | 🦜️🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/llm_bash
581464dcab1c-1
This notebook showcases using LLMs and a bash process to perform simple filesystem commands.from langchain.chains import LLMBashChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0)text = "Please write a bash script that prints 'Hello World' to the console."bash_chain = LLMBashChain.from_llm(llm, verbose=True)bash_chain.run(text) > Entering new LLMBashChain chain... Please write a bash script that prints 'Hello World' to the console. ```bash echo "Hello World" ``` Code: ['echo "Hello
https://python.langchain.com/docs/modules/chains/additional/llm_bash
581464dcab1c-2
echo "Hello World" ``` Code: ['echo "Hello World"'] Answer: Hello World > Finished chain. 'Hello World\n'Customize Prompt​You can also customize the prompt that is used. Here is an example prompting to avoid using the 'echo' utilityfrom langchain.prompts.prompt import PromptTemplatefrom langchain.chains.llm_bash.prompt import BashOutputParser_PROMPT_TEMPLATE = """If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:Question: "copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'"I need to take the following actions:- List all files in the directory- Create a new directory- Copy the files from the first directory into the second directory```bashlsmkdir myNewDirectorycp -r target/* myNewDirectoryDo not use 'echo' when writing the script.That is the format. Begin!
https://python.langchain.com/docs/modules/chains/additional/llm_bash
581464dcab1c-3
Question: {question}"""PROMPT = PromptTemplate( input_variables=["question"], template=_PROMPT_TEMPLATE, output_parser=BashOutputParser(),
https://python.langchain.com/docs/modules/chains/additional/llm_bash
581464dcab1c-4
)bash_chain = LLMBashChain.from_llm(llm, prompt=PROMPT, verbose=True)text = "Please write a bash script that prints 'Hello World' to the console."bash_chain.run(text) > Entering new LLMBashChain chain... Please write a bash script that prints 'Hello World' to the console. ```bash printf "Hello World\n" ``` Code: ['printf "Hello World\\n"'] Answer: Hello World > Finished chain. 'Hello World\n'Persistent Terminal​By default, the chain will run in a separate subprocess each time it is called. This behavior can be changed by instantiating with a persistent bash process.from langchain.utilities.bash import BashProcesspersistent_process = BashProcess(persistent=True)bash_chain = LLMBashChain.from_llm(llm, bash_process=persistent_process, verbose=True)text = "List the current directory then move up a level."bash_chain.run(text) > Entering new LLMBashChain chain... List the current directory then move up a level. ```bash ls cd .. ``` Code: ['ls', 'cd ..'] Answer: api.html llm_summarization_checker.html constitutional_chain.html moderation.html llm_bash.html openai_openapi.yaml llm_checker.html openapi.html llm_math.html
https://python.langchain.com/docs/modules/chains/additional/llm_bash
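A plain `subprocess` sketch (independent of LangChain, standard library only) of why persistence matters: without a persistent process, each call spawns a fresh bash, so state such as the working directory only carries over between commands issued in the same call.

```python
import os
import subprocess
import tempfile

def run_commands(commands, cwd):
    # Non-persistent behavior: every call spawns a fresh bash process,
    # so `cd` only takes effect for commands within the same call.
    script = "\n".join(commands)
    result = subprocess.run(
        ["bash", "-c", script], capture_output=True, text=True, cwd=cwd
    )
    return result.stdout.strip()

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "sub"))

within = run_commands(["cd sub", "pwd"], base)  # 'cd' persists inside this call
fresh = run_commands(["pwd"], base)             # new process: starts back at base
```

`BashProcess(persistent=True)` avoids this by keeping one bash process alive across calls, which is why the second `bash_chain.run(text)` above starts from the directory the first call moved into.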
581464dcab1c-5
openapi.html llm_math.html pal.html llm_requests.html sqlite.html
> Finished chain.
'api.html\t\t\tllm_summarization_checker.html\r\nconstitutional_chain.html\tmoderation.html\r\nllm_bash.html\t\t\topenai_openapi.yaml\r\nllm_checker.html\t\topenapi.html\r\nllm_math.html\t\t\tpal.html\r\nllm_requests.html\t\tsqlite.html'

# Run the same command again and see that the state is maintained between calls
bash_chain.run(text)

> Entering new LLMBashChain chain...
List the current directory then move up a level.

```bash
ls
cd ..
```

Code: ['ls', 'cd ..']
Answer: examples getting_started.html index_examples generic how_to_guides.rst
> Finished chain.
'examples\t\tgetting_started.html\tindex_examples\r\ngeneric\t\t\thow_to_guides.rst'

PreviousHypothetical Document EmbeddingsNextSelf-checking chainCustomize PromptPersistent TerminalCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/modules/chains/additional/llm_bash
176414394179-0
Hypothetical Document Embeddings | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/hyde
176414394179-1
Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsHow toFoundationalDocumentsPopularAdditionalAnalyze DocumentSelf-critique chain with constitutional AICausal program-aided language (CPAL) chainElasticsearch databaseExtractionFLAREArangoDB QA chainGraph DB QA chainHugeGraph QA ChainKuzuQAChainNebulaGraphQAChainGraph QAGraphSparqlQAChainHypothetical Document EmbeddingsBash chainSelf-checking chainMath chainHTTP request chainSummarization checker chainLLM Symbolic MathModerationDynamically selecting from multiple promptsDynamically selecting from multiple retrieversNeptune Open Cypher QA ChainRetrieval QA using OpenAI functionsOpenAPI chainOpenAPI calls with OpenAI functionsProgram-aided language model (PAL) chainQuestion-Answering CitationsDocument QATaggingVector store-augmented text generationMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesChainsAdditionalHypothetical Document EmbeddingsOn this pageHypothetical Document EmbeddingsThis notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper. At a high level, HyDE is an embedding technique that takes queries, generates a hypothetical answer, and then embeds that generated document and uses that as the final example. In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.from langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom
https://python.langchain.com/docs/modules/chains/additional/hyde
176414394179-2
own.from langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chains import LLMChain, HypotheticalDocumentEmbedderfrom langchain.prompts import PromptTemplatebase_embeddings = OpenAIEmbeddings()llm = OpenAI()# Load with `web_search` promptembeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")# Now we can use it as any embedding class!result = embeddings.embed_query("Where is the Taj Mahal?")Multiple generations​We can also generate multiple documents and then combine the embeddings for those. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate documents to return multiple things.multi_llm = OpenAI(n=4, best_of=4)embeddings = HypotheticalDocumentEmbedder.from_llm( multi_llm, base_embeddings, "web_search")result = embeddings.embed_query("Where is the Taj Mahal?")Using our own prompts​Besides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that.In the example below, let's condition it to generate text about a state of the union address (because we will use that in the next example).prompt_template = """Please answer the user's question about the most recent state of the union addressQuestion: {question}Answer:"""prompt = PromptTemplate(input_variables=["question"], template=prompt_template)llm_chain = LLMChain(llm=llm, prompt=prompt)embeddings = HypotheticalDocumentEmbedder( llm_chain=llm_chain, base_embeddings=base_embeddings)result =
https://python.langchain.com/docs/modules/chains/additional/hyde
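By default the combination of the multiple generated-document embeddings is a simple element-wise average. A minimal sketch of that combination step (illustrative only, not LangChain's internal code):

```python
def combine_embeddings(embeddings):
    """Element-wise average of equal-length embedding vectors."""
    n = len(embeddings)
    dim = len(embeddings[0])
    return [sum(vec[i] for vec in embeddings) / n for i in range(dim)]

# Four hypothetical 3-dimensional embeddings, one per generated document
vectors = [
    [1.0, 0.0, 2.0],
    [3.0, 0.0, 2.0],
    [1.0, 4.0, 2.0],
    [3.0, 4.0, 2.0],
]
combined = combine_embeddings(vectors)  # -> [2.0, 2.0, 2.0]
```

Averaging smooths out variance between the individual generations, so a single off-target hypothetical document pulls the final query embedding less far astray.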
176414394179-3
llm_chain=llm_chain, base_embeddings=base_embeddings)result = embeddings.embed_query( "What did the president say about Ketanji Brown Jackson")Using HyDE​Now that we have HyDE, we can use it as we would any other embedding class! Here is using it to find similar passages in the state of the union example.from langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromawith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)docsearch = Chroma.from_texts(texts, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.print(docs[0].page_content) In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities
https://python.langchain.com/docs/modules/chains/additional/hyde
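`similarity_search` then ranks the stored chunks by vector similarity to the hypothetical-document embedding. A self-contained cosine-similarity sketch of that ranking step, with plain Python lists standing in for real embeddings (names here are illustrative):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between vectors a and b
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = [1.0, 0.0]
chunks = {"chunk_a": [0.9, 0.1], "chunk_b": [0.0, 1.0]}
# Rank stored chunks by similarity to the query embedding
best = max(chunks, key=lambda name: cosine_similarity(query_vec, chunks[name]))
```

HyDE's bet is that the embedding of a generated answer sits closer (in exactly this similarity sense) to the relevant passage than the embedding of the short question does.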
176414394179-4
you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://python.langchain.com/docs/modules/chains/additional/hyde
a54841183ae0-0
Retrieval QA using OpenAI functions | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
a54841183ae0-1
Retrieval QA using OpenAI functions

OpenAI functions allow for structuring of response output. This is often useful in question answering when you want to not only get the final answer but also supporting evidence, citations, etc.

In this notebook we show how to use an LLM chain which uses OpenAI functions as part of an overall retrieval pipeline.

from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

loader = TextLoader("../../state_of_the_union.txt", encoding="utf-8")
documents = loader.load()
text_splitter =
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
a54841183ae0-2
encoding="utf-8")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)for i, text in enumerate(texts): text.metadata["source"] = f"{i}-pl"embeddings = OpenAIEmbeddings()docsearch = Chroma.from_documents(texts, embeddings)from langchain.chat_models import ChatOpenAIfrom langchain.chains.combine_documents.stuff import StuffDocumentsChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import create_qa_with_sources_chainllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")qa_chain = create_qa_with_sources_chain(llm)doc_prompt = PromptTemplate( template="Content: {page_content}\nSource: {source}", input_variables=["page_content", "source"],)final_qa_chain = StuffDocumentsChain( llm_chain=qa_chain, document_variable_name="context", document_prompt=doc_prompt,)retrieval_qa = RetrievalQA( retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain)query = "What did the president say about russia"retrieval_qa.run(query) '{\n "answer": "The President expressed strong condemnation of Russia\'s actions in Ukraine and announced measures to isolate Russia and provide support to Ukraine. He stated that Russia\'s invasion of Ukraine will have long-term consequences for Russia and emphasized the commitment to defend NATO countries. The President also mentioned taking robust action through sanctions and releasing oil reserves to mitigate gas prices. Overall, the President conveyed a message of solidarity with Ukraine and determination to protect American interests.",\n "sources": ["0-pl", "4-pl",
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
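Note that the chain returns the structured answer as a JSON-formatted string, which downstream code can parse with the standard library (the string below abbreviates the answer shown above):

```python
import json

# The shape of the string returned by retrieval_qa.run(query)
raw = (
    '{"answer": "The President expressed strong condemnation...", '
    '"sources": ["0-pl", "4-pl", "5-pl", "6-pl"]}'
)

parsed = json.loads(raw)
answer = parsed["answer"]     # the free-text answer
sources = parsed["sources"]   # chunk ids assigned via text.metadata["source"]
```

The `"sources"` entries match the `f"{i}-pl"` identifiers attached to each chunk's metadata before indexing, so they can be mapped back to the original passages.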
a54841183ae0-3
determination to protect American interests.",\n "sources": ["0-pl", "4-pl", "5-pl", "6-pl"]\n}'Using Pydantic​If we want to, we can set the chain to return in Pydantic. Note that if downstream chains consume the output of this chain - including memory - they will generally expect it to be in string format, so you should only use this chain when it is the final chain.qa_chain_pydantic = create_qa_with_sources_chain(llm, output_parser="pydantic")final_qa_chain_pydantic = StuffDocumentsChain( llm_chain=qa_chain_pydantic, document_variable_name="context", document_prompt=doc_prompt,)retrieval_qa_pydantic = RetrievalQA( retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain_pydantic)retrieval_qa_pydantic.run(query) AnswerWithSources(answer="The President expressed strong condemnation of Russia's actions in Ukraine and announced measures to isolate Russia and provide support to Ukraine. He stated that Russia's invasion of Ukraine will have long-term consequences for Russia and emphasized the commitment to defend NATO countries. The President also mentioned taking robust action through sanctions and releasing oil reserves to mitigate gas prices. Overall, the President conveyed a message of solidarity with Ukraine and determination to protect American interests.", sources=['0-pl', '4-pl', '5-pl', '6-pl'])Using in ConversationalRetrievalChain​We can also show what it's like to use this in the ConversationalRetrievalChain. Note that because this chain involves memory, we will NOT use the Pydantic return type.from langchain.chains import ConversationalRetrievalChainfrom langchain.memory import ConversationBufferMemoryfrom langchain.chains import LLMChainmemory =
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
a54841183ae0-4
langchain.memory import ConversationBufferMemoryfrom langchain.chains import LLMChainmemory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\Make sure to avoid using any unclear pronouns.Chat History:{chat_history}Follow Up Input: {question}Standalone question:"""CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)condense_question_chain = LLMChain( llm=llm, prompt=CONDENSE_QUESTION_PROMPT,)qa = ConversationalRetrievalChain( question_generator=condense_question_chain, retriever=docsearch.as_retriever(), memory=memory, combine_docs_chain=final_qa_chain,)query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query})result {'question': 'What did the president say about Ketanji Brown Jackson', 'chat_history': [HumanMessage(content='What did the president say about Ketanji Brown Jackson', additional_kwargs={}, example=False), AIMessage(content='{\n "answer": "The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\'s top legal minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources": ["31-pl"]\n}', additional_kwargs={}, example=False)], 'answer': '{\n "answer": "The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\'s top legal minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources":
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
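The condense-question step is ordinary prompt templating: the chat history and follow-up question are substituted into the template before the LLM rewrites the question. A minimal `str.format` equivalent of what `PromptTemplate.from_template(_template)` does with those two variables (a sketch, not LangChain's implementation):

```python
template = (
    "Given the following conversation and a follow up question, rephrase the "
    "follow up question to be a standalone question, in its original language. "
    "Make sure to avoid using any unclear pronouns.\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

chat_history = (
    "Human: What did the president say about Ketanji Brown Jackson\n"
    "AI: The President nominated Ketanji Brown Jackson ..."
)
prompt = template.format(
    chat_history=chat_history,
    question="what did he say about her predecessor?",
)
```

The LLM's completion of the trailing "Standalone question:" line is what gets sent to the retriever, which is why "her" in the follow-up resolves correctly against the history.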
a54841183ae0-5
minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources": ["31-pl"]\n}'}query = "what did he say about her predecessor?"result = qa({"question": query})result {'question': 'what did he say about her predecessor?', 'chat_history': [HumanMessage(content='What did the president say about Ketanji Brown Jackson', additional_kwargs={}, example=False), AIMessage(content='{\n "answer": "The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\'s top legal minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources": ["31-pl"]\n}', additional_kwargs={}, example=False), HumanMessage(content='what did he say about her predecessor?', additional_kwargs={}, example=False), AIMessage(content='{\n "answer": "The President honored Justice Stephen Breyer for his service as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.",\n "sources": ["31-pl"]\n}', additional_kwargs={}, example=False)], 'answer': '{\n "answer": "The President honored Justice Stephen Breyer for his service as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.",\n "sources": ["31-pl"]\n}'}Using your own output schema​We can change the outputs of our chain by passing in our own schema. The values and descriptions of this schema will inform the function we pass to the OpenAI API, meaning it won't just affect how we parse outputs but will also change the OpenAI output itself. For example we can add a countries_referenced parameter to our schema and describe what we want this parameter to
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
a54841183ae0-6
example we can add a countries_referenced parameter to our schema and describe what we want this parameter to mean, and that'll cause the OpenAI output to include a description of a speaker in the response.In addition to the previous example, we can also add a custom prompt to the chain. This will allow you to add additional context to the response, which can be useful for question answering.from typing import Listfrom pydantic import BaseModel, Fieldfrom langchain.chains.openai_functions import create_qa_with_structure_chainfrom langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.schema import SystemMessage, HumanMessageclass CustomResponseSchema(BaseModel): """An answer to the question being asked, with sources.""" answer: str = Field(..., description="Answer to the question that was asked") countries_referenced: List[str] = Field( ..., description="All of the countries mentioned in the sources" ) sources: List[str] = Field( ..., description="List of sources used to answer the question" )prompt_messages = [ SystemMessage( content=( "You are a world class algorithm to answer " "questions in a specific format." ) ), HumanMessage(content="Answer question using the following context"), HumanMessagePromptTemplate.from_template("{context}"), HumanMessagePromptTemplate.from_template("Question: {question}"), HumanMessage( content="Tips: Make sure to answer in the correct format. Return all of the countries mentioned in the sources in uppercase characters." ),]chain_prompt
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
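Under the hood, a Pydantic schema like `CustomResponseSchema` is converted into a JSON-Schema function definition that is passed to the OpenAI API; the `Field` descriptions become property descriptions the model sees. Roughly, as a hand-written approximation (not the exact generated schema):

```python
# Hand-written approximation of the function definition derived from
# CustomResponseSchema; field descriptions mirror the Field(...) descriptions.
function_def = {
    "name": "CustomResponseSchema",
    "description": "An answer to the question being asked, with sources.",
    "parameters": {
        "type": "object",
        "properties": {
            "answer": {
                "type": "string",
                "description": "Answer to the question that was asked",
            },
            "countries_referenced": {
                "type": "array",
                "items": {"type": "string"},
                "description": "All of the countries mentioned in the sources",
            },
            "sources": {
                "type": "array",
                "items": {"type": "string"},
                "description": "List of sources used to answer the question",
            },
        },
        "required": ["answer", "countries_referenced", "sources"],
    },
}
```

Because the model is instructed to call this function, its output must populate these typed fields, which is how adding `countries_referenced` changes the OpenAI output itself rather than just the parsing.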
a54841183ae0-7
Return all of the countries mentioned in the sources in uppercase characters." ),]chain_prompt = ChatPromptTemplate(messages=prompt_messages)qa_chain_pydantic = create_qa_with_structure_chain( llm, CustomResponseSchema, output_parser="pydantic", prompt=chain_prompt)final_qa_chain_pydantic = StuffDocumentsChain( llm_chain=qa_chain_pydantic, document_variable_name="context", document_prompt=doc_prompt,)retrieval_qa_pydantic = RetrievalQA( retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain_pydantic)query = "What did he say about russia"retrieval_qa_pydantic.run(query) CustomResponseSchema(answer="He announced that American airspace will be closed off to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value and the Russian stock market has lost 40% of its value. He also mentioned that Putin alone is to blame for Russia's reeling economy. The United States and its allies are providing support to Ukraine in their fight for freedom, including military, economic, and humanitarian assistance. The United States is giving more than $1 billion in direct assistance to Ukraine. He made it clear that American forces are not engaged and will not engage in conflict with Russian forces in Ukraine, but they are deployed to defend NATO allies in case Putin decides to keep moving west. He also mentioned that Putin's attack on Ukraine was premeditated and unprovoked, and that the West and NATO responded by building a coalition of freedom-loving nations to confront Putin. The free world is holding Putin accountable through powerful economic sanctions, cutting off Russia's largest banks from the international financial system, and preventing Russia's central bank from defending the Russian Ruble. The
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
a54841183ae0-8
from the international financial system, and preventing Russia's central bank from defending the Russian Ruble. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs.", countries_referenced=['AMERICA', 'RUSSIA', 'UKRAINE'], sources=['4-pl', '5-pl', '2-pl', '3-pl'])
https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa
686d651dfe1e-0
ArangoDB QA chain | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-1
ArangoDB QA chain

This notebook shows how to use LLMs to provide a natural language interface to an ArangoDB database.

You can get a local ArangoDB instance running via the ArangoDB Docker image:

docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodb

An alternative is to use the ArangoDB Cloud Connector package to get a temporary cloud instance running:

pip install python-arango  # The ArangoDB Python Driver
pip install adb-cloud-connector  # The ArangoDB Cloud Instance provisioner
pip install openai
pip install langchain

# Instantiate ArangoDB Database
import json
from arango import ArangoClient
from adb_cloud_connector import get_temp_credentials

con =
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-2
Databaseimport jsonfrom arango import ArangoClientfrom adb_cloud_connector import get_temp_credentialscon = get_temp_credentials()db = ArangoClient(hosts=con["url"]).db( con["dbName"], con["username"], con["password"], verify=True)print(json.dumps(con, indent=2)) Log: requesting new credentials... Succcess: new credentials acquired { "dbName": "TUT3sp29s3pjf1io0h4cfdsq", "username": "TUTo6nkwgzkizej3kysgdyeo8", "password": "TUT9vx0qjqt42i9bq8uik4v9", "hostname": "tutorials.arangodb.cloud", "port": 8529, "url": "https://tutorials.arangodb.cloud:8529" }# Instantiate the ArangoDB-LangChain Graphfrom langchain.graphs import ArangoGraphgraph = ArangoGraph(db)Populating the Database​We will rely on the Python Driver to import our GameOfThrones data into our database.if db.has_graph("GameOfThrones"): db.delete_graph("GameOfThrones", drop_collections=True)db.create_graph( "GameOfThrones", edge_definitions=[ { "edge_collection": "ChildOf", "from_vertex_collections": ["Characters"], "to_vertex_collections": ["Characters"], }, ],)documents = [
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-3
["Characters"], }, ],)documents = [ { "_key": "NedStark", "name": "Ned", "surname": "Stark", "alive": True, "age": 41, "gender": "male", }, { "_key": "CatelynStark", "name": "Catelyn", "surname": "Stark", "alive": False, "age": 40, "gender": "female", }, { "_key": "AryaStark", "name": "Arya", "surname": "Stark", "alive": True, "age": 11, "gender": "female", }, { "_key": "BranStark", "name": "Bran", "surname": "Stark", "alive": True, "age": 10, "gender": "male", },]edges = [ {"_to":
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-4
"male", },]edges = [ {"_to": "Characters/NedStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/NedStark", "_from": "Characters/BranStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/BranStark"},]db.collection("Characters").import_bulk(documents)db.collection("ChildOf").import_bulk(edges) {'error': False, 'created': 4, 'errors': 0, 'empty': 0, 'updated': 0, 'ignored': 0, 'details': []}Getting & Setting the ArangoDB Schema​An initial ArangoDB Schema is generated upon instantiating the ArangoDBGraph object. Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:# The schema should be empty here,# since `graph` was initialized prior to ArangoDB Data ingestion (see above).import jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [], "Collection Schema": [] }graph.set_schema()# We can now view the generated schemaimport jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [ { "graph_name": "GameOfThrones",
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-5
"graph_name": "GameOfThrones", "edge_definitions": [ { "edge_collection": "ChildOf", "from_vertex_collections": [ "Characters" ], "to_vertex_collections": [ "Characters" ] } ] } ], "Collection Schema": [ { "collection_name": "ChildOf", "collection_type": "edge", "edge_properties": [
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-6
"edge_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_from", "type": "str" }, { "name": "_to", "type": "str"
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-7
"type": "str" }, { "name": "_rev", "type": "str" } ], "example_edge": { "_key": "266218884025", "_id": "ChildOf/266218884025", "_from": "Characters/AryaStark", "_to": "Characters/NedStark", "_rev": "_gVPKGSq---" } }, { "collection_name": "Characters", "collection_type": "document",
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-8
"collection_type": "document", "document_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_rev", "type": "str" }, { "name": "name",
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-9
"type": "str" }, { "name": "surname", "type": "str" }, { "name": "alive", "type": "bool" }, { "name": "age", "type": "int" }, { "name": "gender",
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-10
"name": "gender", "type": "str" } ], "example_document": { "_key": "NedStark", "_id": "Characters/NedStark", "_rev": "_gVPKGPi---", "name": "Ned", "surname": "Stark", "alive": true, "age": 41, "gender": "male" } } ] }Querying the ArangoDB Database​We can now use the ArangoDB Graph QA Chain to inquire about our dataimport osos.environ["OPENAI_API_KEY"] = "your-key-here"from langchain.chat_models import
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-11
= "your-key-here"from langchain.chat_models import ChatOpenAIfrom langchain.chains import ArangoGraphQAChainchain = ArangoGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Is Ned Stark alive?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Ned" AND character.surname == "Stark" RETURN character.alive AQL Result: [True] > Finished chain. 'Yes, Ned Stark is alive.'chain.run("How old is Arya Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Arya" && character.surname == "Stark" RETURN character.age AQL Result: [11] > Finished chain. 'Arya Stark is 11 years old.'chain.run("Are Arya Stark and Ned Stark related?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e, p IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER p.vertices[-1]._key == 'NedStark'
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-12
FILTER p.vertices[-1]._key == 'NedStark' RETURN p AQL Result: [{'vertices': [{'_key': 'AryaStark', '_id': 'Characters/AryaStark', '_rev': '_gVPKGPi--B', 'name': 'Arya', 'surname': 'Stark', 'alive': True, 'age': 11, 'gender': 'female'}, {'_key': 'NedStark', '_id': 'Characters/NedStark', '_rev': '_gVPKGPi---', 'name': 'Ned', 'surname': 'Stark', 'alive': True, 'age': 41, 'gender': 'male'}], 'edges': [{'_key': '266218884025', '_id': 'ChildOf/266218884025', '_from': 'Characters/AryaStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq---'}], 'weights': [0, 1]}] > Finished chain. 'Yes, Arya Stark and Ned Stark are related. According to the information retrieved from the database, there is a relationship between them. Arya Stark is the child of Ned Stark.'chain.run("Does Arya Stark have a dead parent?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER v.alive == false RETURN e AQL Result: [{'_key':
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
686d651dfe1e-13
RETURN e AQL Result: [{'_key': '266218884027', '_id': 'ChildOf/266218884027', '_from': 'Characters/AryaStark', '_to': 'Characters/CatelynStark', '_rev': '_gVPKGSu---'}] > Finished chain. 'Yes, Arya Stark has a dead parent. The parent is Catelyn Stark.'Chain Modifiers​You can alter the values of the following ArangoDBGraphQAChain class variables to modify the behaviour of your chain results# Specify the maximum number of AQL Query Results to returnchain.top_k = 10# Specify whether or not to return the AQL Query in the output dictionarychain.return_aql_query = True# Specify whether or not to return the AQL JSON Result in the output dictionarychain.return_aql_result = True# Specify the maximum amount of AQL Generation attempts that should be madechain.max_aql_generation_attempts = 5# Specify a set of AQL Query Examples, which are passed to# the AQL Generation Prompt Template to promote few-shot-learning.# Defaults to an empty string.chain.aql_examples = """# Is Ned Stark alive?RETURN DOCUMENT('Characters/NedStark').alive# Is Arya Stark the child of Ned Stark?FOR e IN ChildOf FILTER e._from == "Characters/AryaStark" AND e._to == "Characters/NedStark" RETURN e"""chain.run("Is Ned Stark alive?")# chain("Is Ned Stark alive?") # Returns a dictionary with the AQL Query & AQL Result > Entering new ArangoGraphQAChain chain... AQL Query (1): RETURN
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
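To make the generated AQL concrete, here is a plain-Python emulation of the "dead parent" traversal over the same Characters/ChildOf data (a sketch of the query's logic, not how ArangoDB executes AQL):

```python
characters = {
    "Characters/NedStark": {"name": "Ned", "alive": True},
    "Characters/CatelynStark": {"name": "Catelyn", "alive": False},
    "Characters/AryaStark": {"name": "Arya", "alive": True},
    "Characters/BranStark": {"name": "Bran", "alive": True},
}
# ChildOf edges as (_from child, _to parent) pairs
child_of = [
    ("Characters/AryaStark", "Characters/NedStark"),
    ("Characters/BranStark", "Characters/NedStark"),
    ("Characters/AryaStark", "Characters/CatelynStark"),
    ("Characters/BranStark", "Characters/CatelynStark"),
]

def dead_parents(child_id):
    # Mirrors: FOR v, e IN 1..1 OUTBOUND child_id ChildOf
    #          FILTER v.alive == false RETURN v
    return [
        parent
        for (child, parent) in child_of
        if child == child_id and not characters[parent]["alive"]
    ]

result = dead_parents("Characters/AryaStark")  # -> ['Characters/CatelynStark']
```

The chain's job is to translate the natural-language question into exactly this kind of one-hop OUTBOUND traversal plus filter, then phrase the raw result ("Catelyn Stark") back as prose.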
686d651dfe1e-14
ArangoGraphQAChain chain... AQL Query (1): RETURN DOCUMENT('Characters/NedStark').alive AQL Result: [True] > Finished chain. 'Yes, according to the information in the database, Ned Stark is alive.'chain.run("Is Bran Stark the child of Ned Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): FOR e IN ChildOf FILTER e._from == "Characters/BranStark" AND e._to == "Characters/NedStark" RETURN e AQL Result: [{'_key': '266218884026', '_id': 'ChildOf/266218884026', '_from': 'Characters/BranStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq--_'}] > Finished chain. 'Yes, according to the information in the ArangoDB database, Bran Stark is indeed the child of Ned Stark.'
https://python.langchain.com/docs/modules/chains/additional/graph_arangodb_qa
5d0b647175fa-0
Self-checking chain | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/llm_checker
5d0b647175fa-1
Self-checking chain

This notebook showcases how to use LLMCheckerChain.

from langchain.chains import LLMCheckerChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
text = "What type of mammal lays the biggest eggs?"
checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)
checker_chain.run(text)

> Entering new LLMCheckerChain chain...
> Entering new SequentialChain chain...
> Finished chain.
> Finished chain.
' No mammal lays the biggest eggs. The Elephant Bird, which
https://python.langchain.com/docs/modules/chains/additional/llm_checker
5d0b647175fa-2
Finished chain. ' No mammal lays the biggest eggs. The Elephant Bird, which was a species of giant bird, laid the largest eggs of any bird.'
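Conceptually, the chain drafts an answer, extracts the assertions behind it, checks each assertion, and revises the answer if any fail. A minimal pure-Python sketch of that verify-then-revise loop (hypothetical data and hard-coded verdicts — in the real chain each step is an LLM call):

```python
# Hypothetical sketch of LLMCheckerChain's verify-then-revise loop.
# In the actual chain, drafting, assertion extraction, checking, and
# revision are all separate LLM calls.
draft = "Ostriches lay the biggest mammal eggs."
assertions = {
    "Ostriches are mammals": False,  # ostriches are birds, not mammals
    "Ostriches lay large eggs": True,
}

if all(assertions.values()):
    answer = draft
else:
    # Revise: the draft rested on a false assertion, so correct it.
    answer = "No mammal lays the biggest eggs."

print(answer)  # No mammal lays the biggest eggs.
```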
https://python.langchain.com/docs/modules/chains/additional/llm_checker
7912408f4eb4-0
LLM Symbolic Math | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/llm_symbolic_math
7912408f4eb4-1
LLM Symbolic MathThis notebook showcases using LLMs and Python to solve algebraic equations. Under the hood it makes use of SymPy.from langchain.llms import OpenAIfrom langchain.chains.llm_symbolic_math.base import LLMSymbolicMathChainllm = OpenAI(temperature=0)llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)Integrals and derivatives​llm_symbolic_math.run("What is the derivative of sin(x)*exp(x) with respect to x?") 'Answer: exp(x)*sin(x) + exp(x)*cos(x)'llm_symbolic_math.run( "What is the integral of
https://python.langchain.com/docs/modules/chains/additional/llm_symbolic_math
7912408f4eb4-2
"What is the integral of exp(x)*sin(x) + exp(x)*cos(x) with respect to x?") 'Answer: exp(x)*sin(x)'Solve linear and differential equations​llm_symbolic_math.run('Solve the differential equation y" - y = e^t') 'Answer: Eq(y(t), C2*exp(-t) + (C1 + t/2)*exp(t))'llm_symbolic_math.run("What are the solutions to this equation y^3 + 1/3y?") 'Answer: {0, -sqrt(3)*I/3, sqrt(3)*I/3}'llm_symbolic_math.run("x = y + 5, y = z - 3, z = x * y. Solve for x, y, z") 'Answer: (3 - sqrt(7), -sqrt(7) - 2, 1 - sqrt(7)), (sqrt(7) + 3, -2 + sqrt(7), 1 + sqrt(7))'
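The derivative answer above can be sanity-checked without SymPy: a quick finite-difference sketch in pure Python (sample point and step size chosen arbitrarily) confirms the symbolic result numerically.

```python
import math

# Numerically check d/dx [sin(x)*exp(x)] == exp(x)*sin(x) + exp(x)*cos(x)
# at one sample point using a central difference.
def f(x):
    return math.sin(x) * math.exp(x)

def claimed_derivative(x):
    return math.exp(x) * (math.sin(x) + math.cos(x))

x0, h = 0.7, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(abs(numeric - claimed_derivative(x0)) < 1e-5)  # True
```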
https://python.langchain.com/docs/modules/chains/additional/llm_symbolic_math
c340ec6e138d-0
KuzuQAChain | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/graph_kuzu_qa
c340ec6e138d-1
KuzuQAChainThis notebook shows how to use LLMs to provide a natural language interface to the Kùzu database.Kùzu is an in-process property graph database management system. You can install it with pip:pip install kuzuOnce installed, you can import it, create a database on the local machine, and connect to it:import kuzudb = kuzu.Database("test_db")conn = kuzu.Connection(db)First, we create the schema for a simple movie database:conn.execute("CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))")conn.execute( "CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))")conn.execute("CREATE REL TABLE ActedIn
https://python.langchain.com/docs/modules/chains/additional/graph_kuzu_qa
c340ec6e138d-2
STRING, birthDate STRING, PRIMARY KEY(name))")conn.execute("CREATE REL TABLE ActedIn (FROM Person TO Movie)") <kuzu.query_result.QueryResult at 0x1066ff410>Then we can insert some data.conn.execute("CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})")conn.execute("CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})")conn.execute("CREATE (:Movie {name: 'The Godfather'})")conn.execute("CREATE (:Movie {name: 'The Godfather: Part II'})")conn.execute( "CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)") <kuzu.query_result.QueryResult at 0x107016210>Creating
https://python.langchain.com/docs/modules/chains/additional/graph_kuzu_qa
c340ec6e138d-3
<kuzu.query_result.QueryResult at 0x107016210>Creating KuzuQAChain​We can now create the KuzuGraph and KuzuQAChain. To create the KuzuGraph we simply need to pass the database object to the KuzuGraph constructor.from langchain.chat_models import ChatOpenAIfrom langchain.graphs import KuzuGraphfrom langchain.chains import KuzuQAChaingraph = KuzuGraph(db)chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)Refresh graph schema information​If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}] Relationships properties: [{'properties': [], 'label': 'ActedIn'}] Relationships: ['(:Person)-[:ActedIn]->(:Movie)'] Querying the graph​We can now use the KuzuQAChain to ask questions of the graph.chain.run("Who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.name Full Context: [{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}] > Finished chain. 'Al Pacino and Robert De Niro both played in The Godfather: Part
https://python.langchain.com/docs/modules/chains/additional/graph_kuzu_qa
c340ec6e138d-4
'Al Pacino and Robert De Niro both played in The Godfather: Part II.'chain.run("Robert De Niro played in which movies?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN m.name Full Context: [{'m.name': 'The Godfather: Part II'}] > Finished chain. 'Robert De Niro played in The Godfather: Part II.'chain.run("Robert De Niro is born in which year?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN p.birthDate Full Context: [{'p.birthDate': '1943-08-17'}] > Finished chain. 'Robert De Niro was born on August 17, 1943.'chain.run("Who is the oldest actor who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'}) WITH p, m, p.birthDate AS birthDate ORDER BY birthDate ASC LIMIT 1 RETURN p.name Full Context: [{'p.name': 'Al Pacino'}]
https://python.langchain.com/docs/modules/chains/additional/graph_kuzu_qa
c340ec6e138d-5
Context: [{'p.name': 'Al Pacino'}] > Finished chain. 'The oldest actor who played in The Godfather: Part II is Al Pacino.'
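One detail worth noting about the generated Cypher above: birthDate is declared as a STRING in the schema, so ORDER BY birthDate ASC finds the oldest actor only because ISO-8601 date strings sort lexicographically in the same order as chronologically. A quick illustration:

```python
# ISO-8601 date strings ("YYYY-MM-DD") compare lexicographically in
# chronological order, so sorting the STRING column birthDate still
# yields the oldest person first.
birth_dates = {"Al Pacino": "1940-04-25", "Robert De Niro": "1943-08-17"}
oldest = min(birth_dates, key=birth_dates.get)
print(oldest)  # Al Pacino
```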
https://python.langchain.com/docs/modules/chains/additional/graph_kuzu_qa
16eb4cec3950-0
GraphSparqlQAChain | 🦜🔗 Langchain
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-1
GraphSparqlQAChainGraph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies; cf. Semantic Web. SPARQL serves as a query language for these graphs, analogous to SQL or Cypher. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-2
Disclaimer: To date, SPARQL query generation via LLMs is still a bit unstable. Be especially careful with UPDATE queries, which alter the graph.There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., Wikidata, and triple stores.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphSparqlQAChainfrom langchain.graphs import RdfGraphgraph = RdfGraph( source_file="http://www.w3.org/People/Berners-Lee/card", standard="rdf", local_copy="test.ttl",)Note that providing a local_copy is necessary for storing changes locally if the source is read-only.Refresh graph schema information​If the schema of the database changes, you can refresh the schema information needed to generate SPARQL queries.graph.load_schema()graph.get_schema In the following, each IRI is followed by the local name and optionally its description in parentheses. The RDF graph supports the following node types: <http://xmlns.com/foaf/0.1/PersonalProfileDocument> (PersonalProfileDocument, None), <http://www.w3.org/ns/auth/cert#RSAPublicKey> (RSAPublicKey, None), <http://www.w3.org/2000/10/swap/pim/contact#Male> (Male, None), <http://xmlns.com/foaf/0.1/Person> (Person, None), <http://www.w3.org/2006/vcard/ns#Work> (Work, None) The RDF graph supports the following relationships: <http://www.w3.org/2000/01/rdf-schema#seeAlso> (seeAlso, None),
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-3
(seeAlso, None), <http://purl.org/dc/elements/1.1/title> (title, None), <http://xmlns.com/foaf/0.1/mbox_sha1sum> (mbox_sha1sum, None), <http://xmlns.com/foaf/0.1/maker> (maker, None), <http://www.w3.org/ns/solid/terms#oidcIssuer> (oidcIssuer, None), <http://www.w3.org/2000/10/swap/pim/contact#publicHomePage> (publicHomePage, None), <http://xmlns.com/foaf/0.1/openid> (openid, None), <http://www.w3.org/ns/pim/space#storage> (storage, None), <http://xmlns.com/foaf/0.1/name> (name, None), <http://www.w3.org/2000/10/swap/pim/contact#country> (country, None), <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> (type, None), <http://www.w3.org/ns/solid/terms#profileHighlightColor> (profileHighlightColor, None), <http://www.w3.org/ns/pim/space#preferencesFile> (preferencesFile, None), <http://www.w3.org/2000/01/rdf-schema#label> (label, None), <http://www.w3.org/ns/auth/cert#modulus> (modulus, None), <http://www.w3.org/2000/10/swap/pim/contact#participant> (participant, None), <http://www.w3.org/2000/10/swap/pim/contact#street2> (street2, None), <http://www.w3.org/2006/vcard/ns#locality> (locality, None),
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-4
(locality, None), <http://xmlns.com/foaf/0.1/nick> (nick, None), <http://xmlns.com/foaf/0.1/homepage> (homepage, None), <http://creativecommons.org/ns#license> (license, None), <http://xmlns.com/foaf/0.1/givenname> (givenname, None), <http://www.w3.org/2006/vcard/ns#street-address> (street-address, None), <http://www.w3.org/2006/vcard/ns#postal-code> (postal-code, None), <http://www.w3.org/2000/10/swap/pim/contact#street> (street, None), <http://www.w3.org/2003/01/geo/wgs84_pos#lat> (lat, None), <http://xmlns.com/foaf/0.1/primaryTopic> (primaryTopic, None), <http://www.w3.org/2006/vcard/ns#fn> (fn, None), <http://www.w3.org/2003/01/geo/wgs84_pos#location> (location, None), <http://usefulinc.com/ns/doap#developer> (developer, None), <http://www.w3.org/2000/10/swap/pim/contact#city> (city, None), <http://www.w3.org/2006/vcard/ns#region> (region, None), <http://xmlns.com/foaf/0.1/member> (member, None), <http://www.w3.org/2003/01/geo/wgs84_pos#long> (long, None), <http://www.w3.org/2000/10/swap/pim/contact#address> (address, None), <http://xmlns.com/foaf/0.1/family_name>
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-5
None), <http://xmlns.com/foaf/0.1/family_name> (family_name, None), <http://xmlns.com/foaf/0.1/account> (account, None), <http://xmlns.com/foaf/0.1/workplaceHomepage> (workplaceHomepage, None), <http://purl.org/dc/terms/title> (title, None), <http://www.w3.org/ns/solid/terms#publicTypeIndex> (publicTypeIndex, None), <http://www.w3.org/2000/10/swap/pim/contact#office> (office, None), <http://www.w3.org/2000/10/swap/pim/contact#homePage> (homePage, None), <http://xmlns.com/foaf/0.1/mbox> (mbox, None), <http://www.w3.org/2000/10/swap/pim/contact#preferredURI> (preferredURI, None), <http://www.w3.org/ns/solid/terms#profileBackgroundColor> (profileBackgroundColor, None), <http://schema.org/owns> (owns, None), <http://xmlns.com/foaf/0.1/based_near> (based_near, None), <http://www.w3.org/2006/vcard/ns#hasAddress> (hasAddress, None), <http://xmlns.com/foaf/0.1/img> (img, None), <http://www.w3.org/2000/10/swap/pim/contact#assistant> (assistant, None), <http://xmlns.com/foaf/0.1/title> (title, None), <http://www.w3.org/ns/auth/cert#key> (key, None), <http://www.w3.org/ns/ldp#inbox> (inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile>
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-6
(inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile> (editableProfile, None), <http://www.w3.org/2000/10/swap/pim/contact#postalCode> (postalCode, None), <http://xmlns.com/foaf/0.1/weblog> (weblog, None), <http://www.w3.org/ns/auth/cert#exponent> (exponent, None), <http://rdfs.org/sioc/ns#avatar> (avatar, None) Querying the graph​Now, you can use the graph SPARQL QA chain to ask questions about the graph.chain = GraphSparqlQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("What is Tim Berners-Lee's work homepage?") > Entering new GraphSparqlQAChain chain... Identified intent: SELECT Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?homepage WHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?homepage . } Full Context: [] > Finished chain. "Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/."Updating the graph​Analogously, you can update the graph, i.e., insert triples, using natural language.chain.run( "Save that the person with the name 'Timothy Berners-Lee' has a work
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa
16eb4cec3950-7
"Save that the person with the name 'Timothy Berners-Lee' has a work homepage at 'http://www.w3.org/foo/bar/'") > Entering new GraphSparqlQAChain chain... Identified intent: UPDATE Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> INSERT { ?person foaf:workplaceHomepage <http://www.w3.org/foo/bar/> . } WHERE { ?person foaf:name "Timothy Berners-Lee" . } > Finished chain. 'Successfully inserted triples into the graph.'Let's verify the results:query = ( """PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n""" """SELECT ?hp\n""" """WHERE {\n""" """ ?person foaf:name "Timothy Berners-Lee" . \n""" """ ?person foaf:workplaceHomepage ?hp .\n""" """}""")graph.query(query) [(rdflib.term.URIRef('https://www.w3.org/'),), (rdflib.term.URIRef('http://www.w3.org/foo/bar/'),)]
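The traces above show an intent-identification step ("Identified intent: SELECT" vs. "UPDATE") before SPARQL generation. In the real chain this classification is done by an LLM; the following is only a hypothetical keyword-based sketch of the same routing idea:

```python
# Hypothetical intent router (the actual chain uses an LLM for this):
# read-style questions map to SELECT, write-style requests to UPDATE.
WRITE_VERBS = ("save", "insert", "add", "delete", "update")

def classify_intent(request: str) -> str:
    first_word = request.strip().lower().split()[0]
    return "UPDATE" if first_word in WRITE_VERBS else "SELECT"

print(classify_intent("What is Tim Berners-Lee's work homepage?"))  # SELECT
print(classify_intent("Save that the person has a work homepage"))  # UPDATE
```

Routing matters here because, as the disclaimer above notes, UPDATE queries alter the graph and deserve extra caution.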
https://python.langchain.com/docs/modules/chains/additional/graph_sparql_qa