Dataset columns:
id — string (lengths 14–15)
text — string (lengths 23–2.21k)
source — string (lengths 52–97)
366b16df4f7e-0 … 366b16df4f7e-6
Diffbot | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/diffbot

Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type. The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.

This covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a document format that we can use downstream.

```python
urls = [
    "https://python.langchain.com/en/latest/index.html",
]
```

The Diffbot Extract API requires an API token. Once you have it, you can extract the data. Read the instructions on how to get the Diffbot API Token.

```python
import os

from langchain.document_loaders import DiffbotLoader

loader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))
```

With the .load() method, you can see the documents loaded:

```python
loader.load()
```

```
[Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]
```
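The example above loads a single URL, but DiffbotLoader accepts a list, so batching several pages is just a longer urls list. A minimal sketch (the URLs are illustrative, not from the original page):

```python
import os

from langchain.document_loaders import DiffbotLoader

# Illustrative URLs; any public pages work, since Diffbot needs no per-site rules.
urls = [
    "https://python.langchain.com/en/latest/index.html",
    "https://blog.langchain.dev/",
]

loader = DiffbotLoader(urls=urls, api_token=os.environ["DIFFBOT_API_TOKEN"])
docs = loader.load()

# Each Document records the page it came from in metadata['source'],
# as shown in the output above.
for doc in docs:
    print(doc.metadata["source"], len(doc.page_content))
```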
d6a44d45a54a-0 … d6a44d45a54a-2
Copy Paste | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/copypaste

This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.

```python
from langchain.docstore.document import Document

text = "..... put the text you copy pasted here......"

doc = Document(page_content=text)
```

Metadata

If you want to add metadata about where you got this piece of text, you easily can with the metadata key.

```python
metadata = {"source": "internet", "date": "Friday"}

doc = Document(page_content=text, metadata=metadata)
```
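Since Document is just a plain constructor, the same pattern extends to several snippets at once. A minimal sketch (the snippet texts and metadata values here are hypothetical):

```python
from langchain.docstore.document import Document

# Hypothetical pasted snippets, each paired with its own metadata.
snippets = [
    ("First pasted passage ...", {"source": "email"}),
    ("Second pasted passage ...", {"source": "chat log"}),
]

# One Document per snippet, each carrying its own metadata.
docs = [Document(page_content=text, metadata=meta) for text, meta in snippets]
```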
7607a281070b-0 … 7607a281070b-3
Recursive URL Loader | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/recursive_url_loader

We may want to load all URLs under a root directory. For example, let's look at the LangChain JS documentation. This has many interesting child pages that we may want to read in bulk. Of course, the WebBaseLoader can load a list of pages. But the challenge is traversing the tree of child pages and actually assembling that list! We do this using the RecursiveUrlLoader. This also gives us the flexibility to exclude some children (e.g., the api directory with > 800 child pages).

```python
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader
```

Let's try a simple example.

```python
url = "https://js.langchain.com/docs/modules/memory/examples/"
loader = RecursiveUrlLoader(url=url)
docs = loader.load()
len(docs)
```

```
12
```

```python
docs[0].page_content[:50]
```

```
'\n\n\n\n\nBuffer Window Memory | 🦜🔗 Langchain\n\n\n\n\n\nSki'
```

```python
docs[0].metadata
```

```
{'source': 'https://js.langchain.com/docs/modules/memory/examples/buffer_window_memory', 'title': 'Buffer Window Memory | 🦜🔗 Langchain', 'description': 'BufferWindowMemory keeps track of the back-and-forths in conversation, and then uses a window of size k to surface the last k back-and-forths to use as memory.', 'language': 'en'}
```

Now, let's try a more extensive example, the docs root dir. We will skip everything under api. For this, we can lazy_load each page as we crawl the tree, using WebBaseLoader to load each as we go.

```python
url = "https://js.langchain.com/docs/"
exclude_dirs = ["https://js.langchain.com/docs/api/"]
loader = RecursiveUrlLoader(url=url, exclude_dirs=exclude_dirs)

# Lazy load each page as it is crawled (printing each doc as a side effect)
docs = [print(doc) or doc for doc in loader.lazy_load()]

# Load all pages
docs = loader.load()
len(docs)
```

```
188
```

```python
docs[0].page_content[:50]
```

```
'\n\n\n\n\nAgent Simulations | 🦜🔗 Langchain\n\n\n\n\n\nSkip t'
```

```python
docs[0].metadata
```

```
{'source': 'https://js.langchain.com/docs/use_cases/agent_simulations/', 'title': 'Agent Simulations | 🦜🔗 Langchain', 'description': 'Agent simulations involve taking multiple agents and having them interact with each other.', 'language': 'en'}
```
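The loader itself doesn't group the crawl results, but the 'source' metadata shown above makes it easy to summarize what was fetched. A minimal standard-library sketch, assuming docs is the list loaded above:

```python
from collections import Counter
from urllib.parse import urlparse

# Count how many crawled pages fall under each top-level docs section
# (e.g. 'modules', 'use_cases'), based on the 'source' metadata field.
sections = Counter(
    urlparse(doc.metadata["source"]).path.split("/")[2]
    for doc in docs
    if len(urlparse(doc.metadata["source"]).path.split("/")) > 2
)
print(sections.most_common())
```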
82d135932257-0 … 82d135932257-6
Google Drive | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/google_drive

Google Drive is a file storage and synchronization service developed by Google.

This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported.

Prerequisites

1. Create a Google Cloud project or use an existing project
2. Enable the Google Drive API
3. Authorize credentials for desktop app
4. pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib

🧑 Instructions for ingesting your Google Docs data

By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. The same applies to token.json via token_path. Note that token.json will be created automatically the first time you use the loader.

GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:

Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"
Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"

```python
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    # Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
    recursive=False,
)
docs = loader.load()
```

When you pass a folder_id, by default all files of type document, sheet, and pdf are loaded. You can modify this behaviour by passing a file_types argument:

```python
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    file_types=["document", "sheet"],
    recursive=False,
)
```

Passing in Optional File Loaders

When processing files other than Google Docs and Google Sheets, it can be helpful to pass an optional file loader to GoogleDriveLoader. If you pass in a file loader, that file loader will be used on documents that do not have a Google Docs or Google Sheets MIME type. Here is an example of how to load an Excel document from Google Drive using a file loader.

```python
from langchain.document_loaders import GoogleDriveLoader, UnstructuredFileIOLoader

file_id = "1x9WBtFPWMEAdjcJzPScRsjpjQvpSo_kz"
loader = GoogleDriveLoader(
    file_ids=[file_id],
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
docs = loader.load()
docs[0]
```

```
Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table', 'source': 'https://drive.google.com/file/d/1aA6L2AR3g0CR-PW03HEZZo4NaVlKpaP7/view'})
```

You can also process a folder with a mix of files and Google Docs/Sheets using the following pattern:

```python
folder_id = "1asMOHY1BqBS84JcRbOag5LOJac74gpmD"
loader = GoogleDriveLoader(
    folder_id=folder_id,
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
docs = loader.load()
docs[0]
```

```
Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table', 'source': 'https://drive.google.com/file/d/1aA6L2AR3g0CR-PW03HEZZo4NaVlKpaP7/view'})
```
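Rather than copying the id out of the URL by hand, you can extract it programmatically. A minimal sketch using only the standard library (the helper name and regex are our own, not part of GoogleDriveLoader):

```python
import re
from typing import Optional


def drive_id_from_url(url: str) -> Optional[str]:
    """Extract the folder or document id from a Google Drive/Docs URL."""
    # Matches the .../folders/<id> and .../d/<id> URL shapes shown above.
    match = re.search(r"/(?:folders|d)/([a-zA-Z0-9_-]+)", url)
    return match.group(1) if match else None


print(drive_id_from_url(
    "https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"
))  # -> 1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5
```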
87af9ae43d32-0 … 87af9ae43d32-2
EPub | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/epub

EPUB is an e-book file format that uses the ".epub" file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.

This covers how to load .epub documents into the Document format that we can use downstream. You'll need to install the pandoc package for this loader to work.

```python
#!pip install pandoc
from langchain.document_loaders import UnstructuredEPubLoader

loader = UnstructuredEPubLoader("winter-sports.epub")
data = loader.load()
```

Retain Elements

Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

```python
loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements")
data = loader.load()
data[0]
```

```
Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
```
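Because each element carries a 'category' in its metadata (as in the Title element above), the elements list can be post-filtered in plain Python. A minimal sketch, assuming data was loaded with mode="elements":

```python
# Keep only the title elements, using the 'category' metadata field
# that Unstructured attaches in "elements" mode.
titles = [doc for doc in data if doc.metadata.get("category") == "Title"]
for doc in titles[:5]:
    print(doc.page_content)
```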
d8639420221f-0 … d8639420221f-3
Cube Semantic Layer | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/cube_semantic

This notebook demonstrates the process of retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.

About Cube

Cube is the Semantic Layer for building data apps. It helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application.

Cube's data model provides structure and definitions that are used as context for an LLM to understand data and generate correct queries. The LLM doesn't need to navigate complex joins and metrics calculations because Cube abstracts those away and provides a simple interface that operates on business-level terminology instead of SQL table and column names. This simplification helps the LLM be less error-prone and avoid hallucinations.

Example

Input arguments (mandatory)

Cube Semantic Loader requires 2 arguments:

- cube_api_url: The URL of your Cube deployment's REST API. Please refer to the Cube documentation for more information on configuring the base path.
- cube_api_token: The authentication token generated based on your Cube API secret. Please refer to the Cube documentation for instructions on generating JSON Web Tokens (JWT).

Input arguments (optional)

- load_dimension_values: Whether to load dimension values for every string dimension or not.
- dimension_values_limit: Maximum number of dimension values to load.
- dimension_values_max_retries: Maximum number of retries to load dimension values.
- dimension_values_retry_delay: Delay between retries to load dimension values.

```python
import jwt

from langchain.document_loaders import CubeSemanticLoader

api_url = "https://api-example.gcp-us-central1.cubecloudapp.dev/cubejs-api/v1/meta"
cubejs_api_secret = "api-secret-here"
security_context = {}
# Read more about security context here: https://cube.dev/docs/security
api_token = jwt.encode(security_context, cubejs_api_secret, algorithm="HS256")

loader = CubeSemanticLoader(api_url, api_token)

documents = loader.load()
```

Returns a list of documents with the following attributes:

- page_content
- metadata
  - table_name
  - column_name
  - column_data_type
  - column_title
  - column_description
  - column_values

```
page_content='Users View City, None' metadata={'table_name': 'users_view', 'column_name': 'users_view.city', 'column_data_type': 'string', 'column_title': 'Users View City', 'column_description': 'None', 'column_member_type': 'dimension', 'column_values': ['Austin', 'Chicago', 'Los Angeles', 'Mountain View', 'New York', 'Palo Alto', 'San Francisco', 'Seattle']}
```
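The optional arguments above are listed but not exercised in the snippet. A minimal sketch of passing them, assuming they are accepted as constructor keyword arguments (the specific values are illustrative):

```python
loader = CubeSemanticLoader(
    api_url,
    api_token,
    load_dimension_values=True,      # load values for every string dimension
    dimension_values_limit=1000,     # cap how many values are fetched per dimension
    dimension_values_max_retries=5,  # retries before giving up on a dimension
    dimension_values_retry_delay=2,  # delay between retries
)
documents = loader.load()
```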
560674f360a0-0 … 560674f360a0-2
Stripe | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/stripe

Stripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.

This notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization.

```python
import os

from langchain.document_loaders import StripeLoader
from langchain.indexes import VectorstoreIndexCreator
```

The Stripe API requires an access token, which can be found inside of the Stripe dashboard.

This document loader also requires a resource option which defines what data you want to load. The following resources are available:

- balance_transactions
- charges
- customers
- events
- refunds
- disputes

```python
stripe_loader = StripeLoader("charges")
```

```python
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([stripe_loader])
stripe_doc_retriever = index.vectorstore.as_retriever()
```
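Once the retriever exists, it can be queried like any other LangChain retriever. A minimal sketch (the query string is illustrative):

```python
# Fetch the charge documents most relevant to a free-text question.
relevant_docs = stripe_doc_retriever.get_relevant_documents("recent failed charges")
for doc in relevant_docs:
    print(doc.page_content[:200])
```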
c521b4604dc6-0 … c521b4604dc6-6
Image captions | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/image_captions

By default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.

This notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions.

```python
#!pip install transformers
from langchain.document_loaders import ImageCaptionLoader
```

Prepare a list of image urls from Wikimedia

```python
list_image_urls = [
    "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg",
    "https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg",
]
```

Create the loader

```python
loader = ImageCaptionLoader(path_images=list_image_urls)
list_docs = loader.load()
list_docs
```

```
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(

[Document(page_content='an image of a frog on a flower [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg'}),
 Document(page_content='an image of a shark swimming in the ocean [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg'}),
 Document(page_content='an image of a painting of a battle scene [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg'}),
 Document(page_content='an image of a passion fruit and a half cut passion [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg'}),
 Document(page_content='an image of the spiral galaxy [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg'}),
 Document(page_content='an image of a man on skis in the snow [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg'}),
 Document(page_content='an image of a flower in the dark [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg'})]
```

```python
from PIL import Image
import requests

Image.open(requests.get(list_image_urls[0], stream=True).raw).convert("RGB")
```

[image output: the first image, rendered inline]

Create the index

```python
from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders([loader])
```

```
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
Using embedded DuckDB without persistence: data will be transient
```

Query

```python
query = "What's the painting about?"
index.query(query)
```

```
' The painting is about a battle scene.'
```

```python
query = "What kind of images are there?"
index.query(query)
```

```
' There are images of a spiral galaxy, a painting of a battle scene, a flower in the dark, and a frog on a flower.'
```
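Beyond question answering, the caption index can also point back to the images themselves. A minimal sketch using the underlying vectorstore's similarity_search (we're assuming index.vectorstore exposes the standard vectorstore interface):

```python
# Find the caption closest to a description, then recover its image URL
# from the 'image_path' metadata attached by ImageCaptionLoader.
hits = index.vectorstore.similarity_search("a painting of a battle", k=1)
print(hits[0].page_content)            # e.g. 'an image of a painting of a battle scene [SEP]'
print(hits[0].metadata["image_path"])  # URL of the matching image
```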
e5517a6b6009-0 … e5517a6b6009-2
WhatsApp Chat | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat

WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.

This notebook covers how to load data from WhatsApp chats into a format that can be ingested into LangChain.

```python
from langchain.document_loaders import WhatsAppChatLoader

loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")
loader.load()
```
85ad3557522c-0 … 85ad3557522c-4
Alibaba Cloud MaxCompute | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute

Alibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.

The MaxComputeLoader lets you execute a MaxCompute SQL query and load the results as one document per row.

```
pip install pyodps
```

```
Collecting pyodps
  Downloading pyodps-0.11.4.post0-cp39-cp39-macosx_10_9_universal2.whl (2.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 1.7 MB/s eta 0:00:00
Requirement already satisfied: charset-normalizer>=2 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.1.0)
Requirement already satisfied: urllib3<2.0,>=1.26.0 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (1.26.15)
Requirement already satisfied: idna>=2.5 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.4)
Requirement already satisfied: certifi>=2017.4.17 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (2023.5.7)
Installing collected packages: pyodps
Successfully installed pyodps-0.11.4.post0
```

Basic Usage

To instantiate the loader you'll need a SQL query to execute, your MaxCompute endpoint and project name, and your access ID and secret access key. The access ID and secret access key can either be passed in directly via the access_id and secret_access_key parameters, or they can be set as the environment variables MAX_COMPUTE_ACCESS_ID and MAX_COMPUTE_SECRET_ACCESS_KEY.

```python
from langchain.document_loaders import MaxComputeLoader

base_query = """
SELECT *
FROM (
    SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info
    UNION ALL
    SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info
    UNION ALL
    SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info
) mydata;
"""

endpoint = "<ENDPOINT>"
project = "<PROJECT>"
ACCESS_ID = "<ACCESS ID>"
SECRET_ACCESS_KEY = "<SECRET ACCESS KEY>"

loader = MaxComputeLoader.from_params(
    base_query,
    endpoint,
    project,
    access_id=ACCESS_ID,
    secret_access_key=SECRET_ACCESS_KEY,
)
data = loader.load()
print(data)
```

```
[Document(page_content='id: 1\ncontent: content1\nmeta_info: meta_info1', metadata={}), Document(page_content='id: 2\ncontent: content2\nmeta_info: meta_info2', metadata={}), Document(page_content='id: 3\ncontent: content3\nmeta_info: meta_info3', metadata={})]
```

```python
print(data[0].page_content)
```

```
id: 1
content: content1
meta_info: meta_info1
```

```python
print(data[0].metadata)
```

```
{}
```

Specifying Which Columns are Content vs Metadata

You can configure which subset of columns should be loaded as the contents of the Document and which as the metadata using the page_content_columns and metadata_columns parameters.

```python
loader = MaxComputeLoader.from_params(
    base_query,
    endpoint,
    project,
    page_content_columns=["content"],  # Specify Document page content
    metadata_columns=["id", "meta_info"],  # Specify Document metadata
    access_id=ACCESS_ID,
    secret_access_key=SECRET_ACCESS_KEY,
)
data = loader.load()
print(data[0].page_content)
```

```
content: content1
```

```python
print(data[0].metadata)
```

```
{'id': 1, 'meta_info': 'meta_info1'}
```
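If you'd rather not pass credentials in code, the environment-variable route mentioned above looks roughly like this (a sketch; setting variables inline is only for illustration, normally they'd come from your shell):

```python
import os

from langchain.document_loaders import MaxComputeLoader

# Normally exported in the shell; set inline here only for illustration.
os.environ["MAX_COMPUTE_ACCESS_ID"] = "<ACCESS ID>"
os.environ["MAX_COMPUTE_SECRET_ACCESS_KEY"] = "<SECRET ACCESS KEY>"

# With the environment variables set, the access parameters can be omitted.
loader = MaxComputeLoader.from_params(base_query, endpoint, project)
data = loader.load()
```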
ee1def5f844d-0 … ee1def5f844d-3
Microsoft Excel | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/excel

The UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the text_as_html key.

```python
from langchain.document_loaders import UnstructuredExcelLoader

loader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")
docs = loader.load()
docs[0]
```

```
Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table'})
```
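Since elements mode stores an HTML table under text_as_html, you can turn it back into a tabular structure. A minimal sketch with pandas (an extra dependency here, not required by the loader itself):

```python
import io

import pandas as pd

# Rebuild a DataFrame from the HTML table stored in the element's metadata.
html = docs[0].metadata["text_as_html"]
df = pd.read_html(io.StringIO(html))[0]
print(df)
```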
fdb63f9d8dc4-0
Trello | 🦜�🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/trello
fdb63f9d8dc4-1
Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTrelloOn this pageTrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a
https://python.langchain.com/docs/integrations/document_loaders/trello
fdb63f9d8dc4-2
and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.The TrelloLoader allows you to load cards from a Trello board and is implemented on top of py-trelloThis currently supports api_key/token only.Credentials generation: https://trello.com/power-ups/admin/Click in the manual token generation link to get the token.To specify the API key and token you can either set the environment variables TRELLO_API_KEY and TRELLO_TOKEN or you can pass api_key and token directly into the from_credentials convenience constructor method.This loader allows you to provide the board name to pull in the corresponding cards into Document objects.Notice that the board "name" is also called "title" in oficial documentation:https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/You can also specify several load parameters to include / remove different fields both from the document page_content properties and metadata.Features​Load cards from a Trello board.Filter cards based on their status (open or closed).Include card names, comments, and checklists in the loaded documents.Customize the additional metadata fields to include in the document.By default all card fields are included for the full text page_content and metadata accordinly.#!pip install py-trello beautifulsoup4# If you have already set the API key and token using environment variables,# you can skip this cell and comment out the `api_key` and `token` named arguments# in the initialization steps below.from getpass import getpassAPI_KEY = getpass()TOKEN = getpass() ········ ········from langchain.document_loaders import TrelloLoader# Get the open cards from "Awesome
https://python.langchain.com/docs/integrations/document_loaders/trello
fdb63f9d8dc4-3
langchain.document_loaders import TrelloLoader# Get the open cards from "Awesome Board"loader = TrelloLoader.from_credentials( "Awesome Board", api_key=API_KEY, token=TOKEN, card_filter="open",)documents = loader.load()print(documents[0].page_content)print(documents[0].metadata) Review Tech partner pages Comments: {'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}# Get all the cards from "Awesome Board" but only include the# card list (column) as extra metadata.loader = TrelloLoader.from_credentials( "Awesome Board", api_key=API_KEY, token=TOKEN, extra_metadata=("list",),)documents = loader.load()print(documents[0].page_content)print(documents[0].metadata) Review Tech partner pages Comments: {'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}# Get the cards from "Another Board" and exclude the card name,# checklist and comments from the Document page_content text.loader = TrelloLoader.from_credentials( "test", api_key=API_KEY, token=TOKEN,
https://python.langchain.com/docs/integrations/document_loaders/trello
fdb63f9d8dc4-4
include_card_name=False, include_checklist=False, include_comments=False,)documents = loader.load()print("Document: " + documents[0].page_content)print(documents[0].metadata)
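The environment-variable path mentioned above keeps secrets out of the notebook entirely. A minimal sketch, assuming TRELLO_API_KEY and TRELLO_TOKEN are already exported in your shell (per the docs, from_credentials falls back to them when api_key/token are omitted):

import os
from langchain.document_loaders import TrelloLoader

# Fail fast if the credentials are not in the environment
assert "TRELLO_API_KEY" in os.environ and "TRELLO_TOKEN" in os.environ

# No api_key/token arguments: the loader reads them from the environment
loader = TrelloLoader.from_credentials("Awesome Board", card_filter="open")
documents = loader.load()
print(f"Loaded {len(documents)} open cards")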
https://python.langchain.com/docs/integrations/document_loaders/trello
cf94e2a9c213-0
IMSDb | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/imsdb
cf94e2a9c213-1
IMSDb is the Internet Movie Script
https://python.langchain.com/docs/integrations/document_loaders/imsdb
cf94e2a9c213-2
Database.This covers how to load IMSDb webpages into a document format that we can use downstream.from langchain.document_loaders import IMSDbLoaderloader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")data = loader.load()data[0].page_content[:500] '\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & Spike
https://python.langchain.com/docs/integrations/document_loaders/imsdb
cf94e2a9c213-3
Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n                                FADE IN:\r\n                                 \r\n          SCENE FROM "GONE WITH'data[0].metadata    {'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}
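A full screenplay loads as one long Document, so splitting it before embedding is usually the next step. A minimal sketch using RecursiveCharacterTextSplitter (the chunk sizes are illustrative, not a recommendation):

from langchain.text_splitter import RecursiveCharacterTextSplitter

# `data` is the list returned by loader.load() above
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)
print(f"Split the script into {len(chunks)} chunks")
print(chunks[0].metadata)  # the source URL is copied onto every chunk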
https://python.langchain.com/docs/integrations/document_loaders/imsdb
579460cafa0c-0
chatgpt_loader | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader
579460cafa0c-1
ChatGPT
https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader
579460cafa0c-2
Data​ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.This notebook covers how to load conversations.json from your ChatGPT data export folder.You can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.from langchain.document_loaders.chatgpt import ChatGPTLoaderloader = ChatGPTLoader(log_file="./example_data/fake_conversations.json", num_logs=1)loader.load()    [Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]
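Passing num_logs=1 loads only the first conversation. To ingest the whole export, a sketch that omits the argument; the assumption here is that the default value loads all conversations, so verify against the loader signature in your installed version:

from langchain.document_loaders.chatgpt import ChatGPTLoader

# Omitting num_logs is assumed to load every conversation in the export
loader = ChatGPTLoader(log_file="./example_data/fake_conversations.json")
docs = loader.load()
print(f"Loaded {len(docs)} conversations")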
https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader
683ed21c0532-0
RST | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/rst
683ed21c0532-1
A reStructured Text (RST)
https://python.langchain.com/docs/integrations/document_loaders/rst
683ed21c0532-2
file is a file format for textual data used primarily in the Python programming language community for technical documentation.UnstructuredRSTLoader​You can load data from RST files with UnstructuredRSTLoader using the following workflow.from langchain.document_loaders import UnstructuredRSTLoaderloader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="elements")docs = loader.load()print(docs[0])    page_content='Example Docs' metadata={'source': 'example_data/README.rst', 'filename': 'README.rst', 'file_directory': 'example_data', 'filetype': 'text/x-rst', 'page_number': 1, 'category': 'Title'}
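Because "elements" mode stores the element category in each document's metadata (note 'category': 'Title' in the output above), you can filter the results by element type. A minimal sketch:

from langchain.document_loaders import UnstructuredRSTLoader

loader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="elements")
docs = loader.load()

# Keep only the elements that unstructured classified as titles
titles = [doc for doc in docs if doc.metadata.get("category") == "Title"]
for title in titles:
    print(title.page_content)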
https://python.langchain.com/docs/integrations/document_loaders/rst
96e172686693-0
AWS S3 Directory | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory
96e172686693-1
Amazon Simple Storage Service
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory
96e172686693-2
(Amazon S3) is an object storage service.This covers how to load document objects from an AWS S3 Directory object.#!pip install boto3from langchain.document_loaders import S3DirectoryLoaderloader = S3DirectoryLoader("testing-hwc")loader.load()Specifying a prefix​You can also specify a prefix for more fine-grained control over which files to load.loader = S3DirectoryLoader("testing-hwc", prefix="fake")loader.load()    [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
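As the output shows, each document's source metadata records the temporary local path the S3 object was downloaded to. A short sketch that inspects what a load pulled from the bucket:

from langchain.document_loaders import S3DirectoryLoader

loader = S3DirectoryLoader("testing-hwc", prefix="fake")
for doc in loader.load():
    # `source` is the temporary file path the object was fetched to
    print(doc.metadata["source"], "-", len(doc.page_content), "characters")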
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory
220ff1a2ddb8-0
LarkSuite (FeiShu) | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/larksuite
220ff1a2ddb8-1
LarkSuite
https://python.langchain.com/docs/integrations/document_loaders/larksuite
220ff1a2ddb8-2
is an enterprise collaboration platform developed by ByteDance.This notebook covers how to load data from the LarkSuite REST API into a format that can be ingested into LangChain, along with example usage for text summarization.The LarkSuite API requires an access token (tenant_access_token or user_access_token); check out the LarkSuite Open Platform documentation for API details.from getpass import getpassfrom langchain.document_loaders.larksuite import LarkSuiteDocLoaderDOMAIN = input("larksuite domain")ACCESS_TOKEN = getpass("larksuite tenant_access_token or user_access_token")DOCUMENT_ID = input("larksuite document id")from pprint import pprintlarksuite_loader = LarkSuiteDocLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID)docs = larksuite_loader.load()pprint(docs)    [Document(page_content='Test Doc\nThis is a Test Doc\n\n1\n2\n3\n\n', metadata={'document_id': 'V76kdbd2HoBbYJxdiNNccajunPf', 'revision_id': 11, 'title': 'Test Doc'})]# see https://python.langchain.com/docs/use_cases/summarization for more detailsfrom langchain.llms import OpenAIfrom langchain.chains.summarize import load_summarize_chainllm = OpenAI()  # define the LLM used by the chain (requires an OpenAI API key)chain = load_summarize_chain(llm, chain_type="map_reduce")chain.run(docs)
https://python.langchain.com/docs/integrations/document_loaders/larksuite
2f6ddc5bd88c-0
Wikipedia | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/wikipedia
2f6ddc5bd88c-1
Wikipedia is a multilingual free online
https://python.langchain.com/docs/integrations/document_loaders/wikipedia
2f6ddc5bd88c-2
encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.Installation​First, you need to install the wikipedia Python package.#!pip install wikipediaExamples​WikipediaLoader has these arguments:query: free text used to find documents in Wikipediaoptional lang: default="en". Use it to search in a specific language part of Wikipediaoptional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded.from langchain.document_loaders import WikipediaLoaderdocs = WikipediaLoader(query="HUNTER X HUNTER", load_max_docs=2).load()len(docs)docs[0].metadata  # meta-information of the Documentdocs[0].page_content[:400]  # the content of the Document
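A sketch combining a small load_max_docs with load_all_available_meta, per the argument descriptions above, to see every metadata field Wikipedia exposes for a page:

from langchain.document_loaders import WikipediaLoader

docs = WikipediaLoader(
    query="HUNTER X HUNTER",
    load_max_docs=1,  # keep the experiment fast
    load_all_available_meta=True,  # include all fields, not just title/summary/date
).load()
print(sorted(docs[0].metadata.keys()))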
https://python.langchain.com/docs/integrations/document_loaders/wikipedia
9202e6d46162-0
Apify Dataset | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
9202e6d46162-1
Apify Dataset is a scalable
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
9202e6d46162-2
append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, which can then be exported to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors, serverless cloud programs for various web scraping, crawling, and data extraction use cases.This notebook shows how to load Apify datasets into LangChain.Prerequisites​You need to have an existing dataset on the Apify platform. If you don't have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.#!pip install apify-clientFirst, import ApifyDatasetLoader into your source code:from langchain.document_loaders import ApifyDatasetLoaderfrom langchain.document_loaders.base import DocumentThen provide a function that maps Apify dataset record fields to LangChain Document format.For example, if your dataset items are structured like this:{ "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform."}The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ),)data = loader.load()An example with question answering​In this example, we use data from a dataset to answer a question.from langchain.docstore.document import Documentfrom langchain.document_loaders import ApifyDatasetLoaderfrom langchain.indexes import VectorstoreIndexCreatorloader = ApifyDatasetLoader(
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
9202e6d46162-3
dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)index = VectorstoreIndexCreator().from_loaders([loader])query = "What is Apify?"result = index.query_with_sources(query)print(result["answer"])print(result["sources"])    Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.    https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
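If your dataset items carry more fields than the url/text pair shown above, the mapping function is where you adapt them. A hypothetical sketch for items that also have a title field (the title key is an assumption about your dataset, not part of any fixed Apify schema):

from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",
    dataset_mapping_function=lambda item: Document(
        # Prepend the (hypothetical) title so it gets embedded with the body
        page_content=f"{item.get('title', '')}\n\n{item.get('text', '')}",
        metadata={"source": item["url"]},
    ),
)
data = loader.load()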
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
02188152ff4c-0
Gutenberg | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/gutenberg
02188152ff4c-1
Project Gutenberg is an online library of free eBooks.This
https://python.langchain.com/docs/integrations/document_loaders/gutenberg
02188152ff4c-2
notebook covers how to load links to Gutenberg e-books into a document format that we can use downstream.from langchain.document_loaders import GutenbergLoaderloader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")data = loader.load()data[0].page_content[:300]    'The Project Gutenberg eBook of The changed brides, by Emma Dorothy\r\n\n\nEliza Nevitte Southworth\r\n\n\n\r\n\n\nThis eBook is for the use of anyone anywhere in the United States and\r\n\n\nmost other parts of the world at no cost and with almost no restrictions\r\n\n\nwhatsoever. You may copy it, give it away or re-u'data[0].metadata    {'source': 'https://www.gutenberg.org/cache/epub/69972/pg69972.txt'}
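As the preview shows, Gutenberg plain-text files are full of \r characters and blank-line padding; a small normalization sketch before passing the text downstream:

from langchain.document_loaders import GutenbergLoader

loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
data = loader.load()

# Drop carriage returns and collapse the triple newlines seen in the preview
cleaned = data[0].page_content.replace("\r", "").replace("\n\n\n", "\n")
print(cleaned[:300])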
https://python.langchain.com/docs/integrations/document_loaders/gutenberg
2794cb5b77b4-0
YouTube transcripts | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
2794cb5b77b4-1
YouTube is an online
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
2794cb5b77b4-2
video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-transcript-apiloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add video info​# ! pip install pytubeloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add language preferences​language param: a list of language codes in descending priority; en by default.translation param: a translation preference for when YouTube doesn't have a transcript in your selected language; en by default.loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True, language=["en", "id"], translation="en",)loader.load()YouTube loader from Google Cloud​Prerequisites​Create a Google Cloud project or use an existing projectEnable the YouTube APIAuthorize credentials for a desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api🧑 Instructions for ingesting your YouTube data​By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.GoogleApiYoutubeLoader can load transcripts from a channel name or a list of video ids.
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
2794cb5b77b4-3
Note: depending on your setup, service_account_path may need to be set instead. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a Channelyoutube_loader_channel = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="Reducible", captions_language="en",)# Use YouTube idsyoutube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)# returns a list of Documentsyoutube_loader_channel.load()
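For the simpler youtube-transcript-api path, loading several videos is just a loop over from_youtube_url. A minimal sketch (the commented-out URL is a placeholder to replace with your own):

from langchain.document_loaders import YoutubeLoader

video_urls = [
    "https://www.youtube.com/watch?v=QsYGlZkevEg",
    # "https://www.youtube.com/watch?v=...",  # add more videos here
]

docs = []
for url in video_urls:
    loader = YoutubeLoader.from_youtube_url(url, add_video_info=True)
    docs.extend(loader.load())
print(f"Loaded {len(docs)} transcripts")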
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
6dc4e28f5a72-0
2Markdown | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-1
2markdown service transforms website content into structured markdown files.#
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-2
You will need to get your own API key. See https://2markdown.com/loginapi_key = ""from langchain.document_loaders import ToMarkdownLoaderloader = ToMarkdownLoader.from_api_key( url="https://python.langchain.com/en/latest/", api_key=api_key)docs = loader.load()print(docs[0].page_content) ## Contents - [Getting Started](#getting-started) - [Modules](#modules) - [Use Cases](#use-cases) - [Reference Docs](#reference-docs) - [LangChain Ecosystem](#langchain-ecosystem) - [Additional Resources](#additional-resources) ## Welcome to LangChain [\#](\#welcome-to-langchain "Permalink to this headline") **LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be: 1. _Data-aware_: connect a language model to other sources of data 2. _Agentic_: allow a language model to interact with its environment The LangChain framework is designed around these principles. This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/). ## Getting Started [\#](\#getting-started "Permalink to this headline") How to get started using LangChain to create
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-3
a Language Model application. - [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html) Concepts and terminology. - [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html) Tutorials created by community experts and presented on YouTube. - [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html) ## Modules [\#](\#modules "Permalink to this headline") These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides. The modules are (from least to most complex): - [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations. - [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization. - [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent. -
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-4
[Indexes](https://python.langchain.com/en/latest/modules/data_connection.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data. - [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility). - [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete. - [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application. ## Use Cases [\#](\#use-cases "Permalink to this headline") Best practices and built-in implementations for common LangChain use cases: - [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI. - [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-5
- [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data. - [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer. - [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them. - [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc). - [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code. - [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions. - [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text. - [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation. -
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-6
[Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation. ## Reference Docs [\#](\#reference-docs "Permalink to this headline") Full documentation on all methods, classes, installation methods, and integration setups for LangChain. - [Reference Documentation](https://python.langchain.com/en/latest/reference.html) ## LangChain Ecosystem [\#](\#langchain-ecosystem "Permalink to this headline") Guides for how other companies/products can be used with LangChain. - [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html) ## Additional Resources [\#](\#additional-resources "Permalink to this headline") Additional resources we think may be useful as you develop your application! - [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents. - [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications. - [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
6dc4e28f5a72-7
template repositories for deploying LangChain apps. - [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents. - [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. - [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain! - [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos. - [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.
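Converting several pages is a loop over the same from_api_key constructor used above; a short sketch (the URL list is illustrative):

from langchain.document_loaders import ToMarkdownLoader

api_key = ""  # your 2markdown API key, see https://2markdown.com/login
pages = [
    "https://python.langchain.com/en/latest/",
    # add more pages to convert here
]

all_docs = []
for url in pages:
    loader = ToMarkdownLoader.from_api_key(url=url, api_key=api_key)
    all_docs.extend(loader.load())
print(f"Converted {len(all_docs)} pages to markdown")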
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
9f9568892a32-0
TOML | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/toml
9f9568892a32-1
TOML is a file format for configuration files.
https://python.langchain.com/docs/integrations/document_loaders/toml
9f9568892a32-2
It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language" referring to its creator, Tom Preston-Werner.If you need to load TOML files, use the TomlLoader.from langchain.document_loaders import TomlLoaderloader = TomlLoader("example_data/fake_rule.toml")rule = loader.load()rule    [Document(page_content='{"internal": {"creation_date": "2023-05-01", "updated_date": "2022-05-01", "release": ["release_type"], "min_endpoint_version": "some_semantic_version", "os_list": ["operating_system_list"]}, "rule": {"uuid": "some_uuid", "name": "Fake Rule Name", "description": "Fake description of rule", "query": "process where process.name : \\"somequery\\"\\n", "threat": [{"framework": "MITRE ATT&CK", "tactic": {"name": "Execution", "id": "TA0002", "reference": "https://attack.mitre.org/tactics/TA0002/"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})]
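Since the loader serializes the parsed TOML into the JSON-style page_content shown above, you can round-trip it back into a dictionary when you need structured access. A sketch, assuming page_content is valid JSON as in the example output:

import json

from langchain.document_loaders import TomlLoader

loader = TomlLoader("example_data/fake_rule.toml")
rule = loader.load()

# Assumes page_content is the JSON serialization shown in the output above
parsed = json.loads(rule[0].page_content)
print(parsed["rule"]["name"])  # -> Fake Rule Name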
https://python.langchain.com/docs/integrations/document_loaders/toml
daf269555c1f-0
Iugu | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/iugu
daf269555c1f-1
Iugu is a Brazilian services and software as a service
https://python.langchain.com/docs/integrations/document_loaders/iugu
daf269555c1f-2
(SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import IuguLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Iugu API requires an access token, which can be found inside the Iugu dashboard.This document loader also requires a resource option which defines what data you want to load.The following resources are available: Documentation.iugu_loader = IuguLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([iugu_loader])iugu_doc_retriever = index.vectorstore.as_retriever()
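With the retriever in hand, querying follows the standard retriever interface; a minimal sketch continuing from the setup above (the question is illustrative):

# Continues from the VectorstoreIndexCreator setup above
relevant_docs = iugu_doc_retriever.get_relevant_documents("Which charges were created this month?")
for doc in relevant_docs:
    print(doc.metadata.get("source"), doc.page_content[:100])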
https://python.langchain.com/docs/integrations/document_loaders/iugu
59984f065687-0
Jupyter Notebook | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook