id (string, 14-15 chars) | text (string, 23-2.21k chars) | source (string, 52-97 chars) |
---|---|---|
0451ebcf2053-0 | GitBook | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/gitbook |
0451ebcf2053-1 | GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs. This notebook shows how to pull page data | https://python.langchain.com/docs/integrations/document_loaders/gitbook |
0451ebcf2053-2 | teams can document everything from products to internal knowledge bases and APIs.This notebook shows how to pull page data from any GitBook.from langchain.document_loaders import GitbookLoaderLoad from single GitBook page​loader = GitbookLoader("https://docs.gitbook.com")page_data = loader.load()page_data [Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]Load from all paths in a given GitBook​For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True.loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)all_pages_data = loader.load() Fetching text from https://docs.gitbook.com/ Fetching text from | https://python.langchain.com/docs/integrations/document_loaders/gitbook |
0451ebcf2053-3 | Fetching text from https://docs.gitbook.com/ Fetching text from https://docs.gitbook.com/getting-started/overview Fetching text from https://docs.gitbook.com/getting-started/import Fetching text from https://docs.gitbook.com/getting-started/git-sync Fetching text from https://docs.gitbook.com/getting-started/content-structure Fetching text from https://docs.gitbook.com/getting-started/collaboration Fetching text from https://docs.gitbook.com/getting-started/publishing Fetching text from https://docs.gitbook.com/tour/quick-find Fetching text from https://docs.gitbook.com/tour/editor Fetching text from https://docs.gitbook.com/tour/customization Fetching text from https://docs.gitbook.com/tour/member-management Fetching text from https://docs.gitbook.com/tour/pdf-export Fetching text from https://docs.gitbook.com/tour/activity-history Fetching text from https://docs.gitbook.com/tour/insights Fetching text from https://docs.gitbook.com/tour/notifications Fetching text from https://docs.gitbook.com/tour/internationalization Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts Fetching text from https://docs.gitbook.com/tour/seo Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security Fetching text from https://docs.gitbook.com/advanced-guides/integrations Fetching text from | https://python.langchain.com/docs/integrations/document_loaders/gitbook |
0451ebcf2053-4 | https://docs.gitbook.com/advanced-guides/integrations Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings Fetching text from https://docs.gitbook.com/billing-and-admin/plans Fetching text from https://docs.gitbook.com/troubleshooting/faqs Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues Fetching text from https://docs.gitbook.com/troubleshooting/supportprint(f"fetched {len(all_pages_data)} documents.")# show second documentall_pages_data[2] fetched 28 documents. Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport | https://python.langchain.com/docs/integrations/document_loaders/gitbook |
0451ebcf2053-5 | content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0) | https://python.langchain.com/docs/integrations/document_loaders/gitbook |
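The GitBook rows above flatten the example notebook into single table cells. A minimal sketch of the same usage, reconstructed from that text (the https://docs.gitbook.com root path is the example the notebook itself uses):

```python
from langchain.document_loaders import GitbookLoader

# Load a single GitBook page
loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()

# Load every page reachable from the root path (one fetch per path)
all_loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_pages_data = all_loader.load()

print(f"fetched {len(all_pages_data)} documents.")
print(all_pages_data[2].metadata)  # e.g. {'source': ..., 'title': 'Import'}
```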
38f56933f534-0 | BibTeX | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/bibtex |
38f56933f534-1 | BibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to | https://python.langchain.com/docs/integrations/document_loaders/bibtex |
38f56933f534-2 | format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.BibTeX files have a .bib extension and consist of plain text entries representing references to various publications, such as books, articles, conference papers, theses, and more. Each BibTeX entry follows a specific structure and contains fields for different bibliographic details like author names, publication title, journal or book title, year of publication, page numbers, and more.Bibtex files can also store the path to documents, such as .pdf files that can be retrieved.Installation First, you need to install bibtexparser and PyMuPDF.#!pip install bibtexparser pymupdfExamples BibtexLoader has these arguments:file_path: the path to the .bib bibtex fileoptional max_docs: default=None, i.e. no limit. Use it to limit the number of retrieved documents.optional max_content_chars: default=4000. Use it to limit the number of characters in a single document.optional load_extra_meta: default=False. By default only the most important fields from the bibtex entries are loaded: Published (publication year), Title, Authors, Summary, Journal, Keywords, and URL. If True, it will also try to load and return the entry_id, note, doi, and links fields. optional file_pattern: default=r'[^:]+\.pdf'. Regex pattern to find files in the file entry. The default pattern supports Zotero-flavour bibtex style and bare file paths.from langchain.document_loaders import BibtexLoader# Create a dummy bibtex file and download a pdf.import urllib.requesturllib.request.urlretrieve( "https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf", "einstein1905.pdf")bibtex_text = """ | https://python.langchain.com/docs/integrations/document_loaders/bibtex |
38f56933f534-3 | "einstein1905.pdf")bibtex_text = """ @article{einstein1915, title={Die Feldgleichungen der Gravitation}, abstract={Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{\"a}tstheorie`` in den Sitzungsberichten der Preu{\ss}ischen Akademie der Wissenschaften 1915 ver{\"o}ffentlicht.}, author={Einstein, Albert}, journal={Sitzungsberichte der K{\"o}niglich Preu{\ss}ischen Akademie der Wissenschaften}, volume={1915}, number={1}, pages={844--847}, year={1915}, doi={10.1002/andp.19163540702}, link={https://onlinelibrary.wiley.com/doi/abs/10.1002/andp.19163540702}, file={einstein1905.pdf} } """# save bibtex_text to biblio.bib filewith open("./biblio.bib", "w") as file: file.write(bibtex_text)docs = BibtexLoader("./biblio.bib").load()docs[0].metadata {'id': 'einstein1915', 'published_year': '1915', 'title': 'Die | https://python.langchain.com/docs/integrations/document_loaders/bibtex |
38f56933f534-4 | 'published_year': '1915', 'title': 'Die Feldgleichungen der Gravitation', 'publication': 'Sitzungsberichte der K{"o}niglich Preu{\\ss}ischen Akademie der Wissenschaften', 'authors': 'Einstein, Albert', 'abstract': 'Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{"a}tstheorie`` in den Sitzungsberichten der Preu{\\ss}ischen Akademie der Wissenschaften 1915 ver{"o}ffentlicht.', 'url': 'https://doi.org/10.1002/andp.19163540702'}print(docs[0].page_content[:400]) # all pages of the pdf content ON THE ELECTRODYNAMICS OF MOVING BODIES By A. EINSTEIN June 30, 1905 It is known that Maxwell’s electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the recipro- cal electrodynamic action of a magnet and a conductor. The observable phe- nomenon here depends only on the r | https://python.langchain.com/docs/integrations/document_loaders/bibtex |
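Pulled out of the flattened cells above, a sketch of the BibtexLoader workflow: download a PDF, write a .bib entry whose file field points at it, then load. The entry is a trimmed copy of the einstein1915 example from the original notebook:

```python
import urllib.request

from langchain.document_loaders import BibtexLoader

# Download the PDF that the bibtex entry will reference
urllib.request.urlretrieve(
    "https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf",
    "einstein1905.pdf",
)

# Trimmed bibtex entry; the file field points at the downloaded PDF
bibtex_text = r"""
@article{einstein1915,
    title={Die Feldgleichungen der Gravitation},
    author={Einstein, Albert},
    journal={Sitzungsberichte der K{\"o}niglich Preu{\ss}ischen Akademie der Wissenschaften},
    year={1915},
    doi={10.1002/andp.19163540702},
    file={einstein1905.pdf}
}
"""
with open("./biblio.bib", "w") as f:
    f.write(bibtex_text)

docs = BibtexLoader("./biblio.bib").load()
print(docs[0].metadata)            # bibliographic fields (id, title, authors, ...)
print(docs[0].page_content[:400])  # text extracted from the referenced PDF
```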
b3615910394a-0 | Airtable | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/airtable |
b3615910394a-1 | Airtable pip install pyairtable from langchain.document_loaders import AirtableLoader Get your API key here. Get ID of your base here. Get your table ID from | https://python.langchain.com/docs/integrations/document_loaders/airtable |
b3615910394a-2 | import AirtableLoaderGet your API key here.Get ID of your base here.Get your table ID from the table url as shown here.api_key = "xxx"base_id = "xxx"table_id = "xxx"loader = AirtableLoader(api_key, table_id, base_id)docs = loader.load()Returns each table row as dict.len(docs) 3eval(docs[0].page_content) {'id': 'recF3GbGZCuh9sXIQ', 'createdTime': '2023-06-09T04:47:21.000Z', 'fields': {'Priority': 'High', 'Status': 'In progress', 'Name': 'Document Splitters'}} | https://python.langchain.com/docs/integrations/document_loaders/airtable |
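A sketch of the Airtable usage embedded in the cell above; the "xxx" values are placeholders for your own API key, base ID, and table ID, and the original example passes table_id before base_id:

```python
from langchain.document_loaders import AirtableLoader

api_key = "xxx"   # placeholder: your Airtable API key
base_id = "xxx"   # placeholder: your base ID
table_id = "xxx"  # placeholder: your table ID (taken from the table URL)

# Note the argument order used in the original example: api_key, table_id, base_id
loader = AirtableLoader(api_key, table_id, base_id)
docs = loader.load()

# Each table row comes back as one Document; page_content is a dict-style string,
# which the original example inspects with eval()
print(len(docs))
print(eval(docs[0].page_content)["fields"])
```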
a50339e882bc-0 | Confluence | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/confluence |
a50339e882bc-1 | Confluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles | https://python.langchain.com/docs/integrations/document_loaders/confluence |
a50339e882bc-2 | saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities. A loader for Confluence pages.This loader currently supports username/api_key and OAuth2 login. Additionally, on-prem installations also support token authentication. Specify a list of page_id-s and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.You can also specify a boolean include_attachments to include attachments; this is set to False by default. If set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>Before using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:#!pip install atlassian-python-apiExamples Username and Password or Username and API Token (Atlassian Cloud only) This example authenticates using either a username and password or, if you're connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token. | https://python.langchain.com/docs/integrations/document_loaders/confluence |
a50339e882bc-3 | You can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.The limit parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total.
By default the code will return up to 1000 documents in 50 documents batches. To control the total number of documents use the max_pages parameter.
Plese note the maximum value for the limit parameter in the atlassian-python-api package is currently 100. from langchain.document_loaders import ConfluenceLoaderloader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345")documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)Personal Access Token (Server/On-Prem only)​This method is valid for the Data Center/Server on-prem edition only.
For more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html.
When using a PAT you provide only the token value, you cannot provide a username. | https://python.langchain.com/docs/integrations/document_loaders/confluence |
a50339e882bc-4 | When using a PAT you provide only the token value; you cannot provide a username. Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents to which said user has access. from langchain.document_loaders import ConfluenceLoaderloader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="12345")documents = loader.load( space_key="SPACE", include_attachments=True, limit=50, max_pages=50) | https://python.langchain.com/docs/integrations/document_loaders/confluence |
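The two authentication paths described in the Confluence rows above, collected into one runnable sketch; the site URL, username, and token values are the placeholders from the original example:

```python
from langchain.document_loaders import ConfluenceLoader

# Atlassian Cloud: username + API token
cloud_loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345",
)
cloud_docs = cloud_loader.load(space_key="SPACE", include_attachments=True, limit=50)

# Data Center / Server (on-prem): Personal Access Token only, no username
pat_loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="12345")
pat_docs = pat_loader.load(
    space_key="SPACE",
    include_attachments=True,
    limit=50,      # batch size per API call (atlassian-python-api caps this at 100)
    max_pages=50,  # cap on the total number of pages fetched
)
```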
ab0fac60618a-0 | Org-mode | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/org_mode |
ab0fac60618a-1 | Org Mode is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software | https://python.langchain.com/docs/integrations/document_loaders/org_mode |
ab0fac60618a-2 | formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.UnstructuredOrgModeLoader You can load data from Org-mode files with UnstructuredOrgModeLoader using the following workflow.from langchain.document_loaders import UnstructuredOrgModeLoaderloader = UnstructuredOrgModeLoader(file_path="example_data/README.org", mode="elements")docs = loader.load()print(docs[0]) page_content='Example Docs' metadata={'source': 'example_data/README.org', 'filename': 'README.org', 'file_directory': 'example_data', 'filetype': 'text/org', 'page_number': 1, 'category': 'Title'} | https://python.langchain.com/docs/integrations/document_loaders/org_mode |
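The Org-mode cell above already contains the complete example; here it is as a readable sketch (the example_data/README.org path comes from that notebook):

```python
from langchain.document_loaders import UnstructuredOrgModeLoader

# "elements" mode keeps each structural element (title, paragraph, ...) as its own Document
loader = UnstructuredOrgModeLoader(file_path="example_data/README.org", mode="elements")
docs = loader.load()

print(docs[0].page_content)  # 'Example Docs'
print(docs[0].metadata)      # filename, filetype, category, ...
```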
10a505d34cff-0 | Tencent COS File | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file |
10a505d34cff-1 | Tencent COS File This covers how to load document object from a Tencent COS File.#! pip install cos-python-sdk-v5from langchain.document_loaders | https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file |
10a505d34cff-2 | from a Tencent COS File.#! pip install cos-python-sdk-v5from langchain.document_loaders import TencentCOSFileLoaderfrom qcloud_cos import CosConfigconf = CosConfig( Region="your cos region", SecretId="your cos secret_id", SecretKey="your cos secret_key",)loader = TencentCOSFileLoader(conf=conf, bucket="you_cos_bucket", key="fake.docx")loader.load() | https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file |
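The Tencent COS cell above, unpacked into a readable sketch; the region, credentials, bucket, and key are the placeholder values from the original example:

```python
# pip install cos-python-sdk-v5
from langchain.document_loaders import TencentCOSFileLoader
from qcloud_cos import CosConfig

conf = CosConfig(
    Region="your cos region",         # placeholder
    SecretId="your cos secret_id",    # placeholder
    SecretKey="your cos secret_key",  # placeholder
)
loader = TencentCOSFileLoader(conf=conf, bucket="you_cos_bucket", key="fake.docx")
docs = loader.load()
```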
9f6bd6346cc5-0 | iFixit | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-1 | iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-2 | open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.This loader will allow you to download the text of a repair guide, text of Q&A's and wikis from devices on iFixit using their open APIs. It's incredibly useful for context related to technical documents and answers to questions about devices in the corpus of data on iFixit.from langchain.document_loaders import IFixitLoaderloader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")data = loader.load()data [Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-3 | Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]loader = IFixitLoader( "https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself")data = loader.load()data [Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-4 | physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-5 | that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-6 | said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I’m not sure if I accidentally bend the phone when I installed the shell, | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-7 | I’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-8 | is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-9 | changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to get | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-10 | me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-11 | fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad")data = loader.load()data [Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)]Searching iFixit using /suggest​If you're looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents.data = IFixitLoader.load_suggestions("Banana")data [Document(page_content='Banana\nTasty fruit. Good source of potassium. Yellow.\n== | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-12 | fruit. Good source of potassium. Yellow.\n== Background Information ==\n\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles racing close behind you. Also used commonly as a synonym for “crazy” or “insane”.\n\nBotanically, the banana is considered a berry, although it isn’t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree’s ability to produce fruit year round.\n\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\n\n== Technical Specifications ==\n\n* Dimensions: Variable depending on genetics of the parent tree\n* Color: Variable depending on ripeness, region, and season\n\n== Additional Information ==\n\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0), Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
9f6bd6346cc5-13 | Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)] | https://python.langchain.com/docs/integrations/document_loaders/ifixit |
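A short sketch of the two iFixit entry points shown above: loading a single guide, answer thread, or device wiki by URL, and searching by keyword via the /suggest endpoint. The URL and the search term are the ones used in the original examples:

```python
from langchain.document_loaders import IFixitLoader

# Load a single guide / Q&A thread / device wiki by its iFixit URL
loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
data = loader.load()
print(data[0].metadata["title"])  # 'Banana Teardown'

# Search iFixit via the /suggest endpoint and load every suggested item
suggested = IFixitLoader.load_suggestions("Banana")
print(len(suggested))
```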
f3082df5ef04-0 | Embaas | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/embaas |
f3082df5ef04-1 | embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and | https://python.langchain.com/docs/integrations/document_loaders/embaas |
f3082df5ef04-2 | managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.Prerequisites Create a free embaas account at https://embaas.io/register and generate an API keyDocument Text Extraction API The document text extraction API allows you to extract the text from a given document. The API supports a variety of document formats, including PDF, mp3, mp4 and more. For a full list of supported formats, check out the API docs (link below).# Set API keyembaas_api_key = "YOUR_API_KEY"# or set environment variableos.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"Using a blob (bytes) from langchain.document_loaders.embaas import EmbaasBlobLoaderfrom langchain.document_loaders.blob_loaders import Blobblob_loader = EmbaasBlobLoader()blob = Blob.from_path("example.pdf")documents = blob_loader.load(blob)# You can also directly create embeddings with your preferred embeddings modelblob_loader = EmbaasBlobLoader(params={"model": "e5-large-v2", "should_embed": True})blob = Blob.from_path("example.pdf")documents = blob_loader.load(blob)print(documents[0]["metadata"]["embedding"])Using a file from langchain.document_loaders.embaas import EmbaasLoaderfile_loader = EmbaasLoader(file_path="example.pdf")documents = file_loader.load()# Disable automatic text splittingfile_loader = EmbaasLoader(file_path="example.mp3", params={"should_chunk": False})documents = file_loader.load()For more detailed information about the embaas document text extraction API, please refer to the official embaas API documentation. | https://python.langchain.com/docs/integrations/document_loaders/embaas |
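The Embaas cells above run several snippets together; a sketch separating the blob-based and file-based loaders (YOUR_API_KEY, example.pdf, and example.mp3 are placeholders from the original example):

```python
import os

from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader

os.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"  # placeholder key

# Blob (bytes) input; should_embed=True also returns embeddings for each chunk
blob_loader = EmbaasBlobLoader(params={"model": "e5-large-v2", "should_embed": True})
blob = Blob.from_path("example.pdf")
documents = blob_loader.load(blob)
print(documents[0]["metadata"]["embedding"])  # indexing as shown in the original example

# File-path input; should_chunk=False disables automatic text splitting
file_loader = EmbaasLoader(file_path="example.mp3", params={"should_chunk": False})
documents = file_loader.load()
```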
bedc95270f1b-0 | CoNLL-U | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/conll-u |
bedc95270f1b-1 | CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files | https://python.langchain.com/docs/integrations/document_loaders/conll-u |
bedc95270f1b-2 | is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.Blank lines marking sentence boundaries.Comment lines starting with hash (#).This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.from langchain.document_loaders import CoNLLULoaderloader = CoNLLULoader("example_data/conllu.conllu")document = loader.load()document [Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})] | https://python.langchain.com/docs/integrations/document_loaders/conll-u |
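A sketch of the CoNLL-U example above; example_data/conllu.conllu is the sample file used in the original notebook, and the whole file comes back as one Document:

```python
from langchain.document_loaders import CoNLLULoader

loader = CoNLLULoader("example_data/conllu.conllu")
document = loader.load()
print(document)
# [Document(page_content='They buy and sell books.',
#           metadata={'source': 'example_data/conllu.conllu'})]
```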
08ba7283b009-0 | Loading documents from a YouTube url | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio |
08ba7283b009-1 | Loading documents from a YouTube url. Building chat or QA applications on YouTube videos is a topic of high interest. Below we show how to | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio |
08ba7283b009-2 | chat or QA applications on YouTube videos is a topic of high interest. Below we show how to easily go from a YouTube url to text to chat! We will use the OpenAIWhisperParser, which will use the OpenAI Whisper API to transcribe audio to text. Note: You will need to have an OPENAI_API_KEY supplied.from langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import OpenAIWhisperParserfrom langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoaderWe will use yt_dlp to download audio for YouTube urls. We will use pydub to split downloaded audio files (such that we adhere to Whisper API's 25MB file size limit).pip install yt_dlp pip install pydubYouTube url to text Use YoutubeAudioLoader to fetch / download the audio files. Then, use OpenAIWhisperParser() to transcribe them to text. Let's take the first lecture of Andrej Karpathy's YouTube course as an example! # Two Karpathy lecture videosurls = ["https://youtu.be/kCc8FmEb1nY", "https://youtu.be/VMj-3S1tku0"]# Directory to save audio filessave_dir = "~/Downloads/YouTube"# Transcribe the videos to textloader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())docs = loader.load()    [youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY    [youtube] kCc8FmEb1nY: Downloading webpage    [youtube] kCc8FmEb1nY: Downloading android player API JSON    [info] kCc8FmEb1nY: Downloading 1 format(s): 140    [dashsegments] Total fragments: 11 | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio
08ba7283b009-3 | format(s): 140 [dashsegments] Total fragments: 11 [download] Destination: /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a [download] 100% of 107.73MiB in 00:00:18 at 5.92MiB/s [FixupM4a] Correcting container of "/Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a" [ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a; file is already in target format m4a [youtube] Extracting URL: https://youtu.be/VMj-3S1tku0 [youtube] VMj-3S1tku0: Downloading webpage [youtube] VMj-3S1tku0: Downloading android player API JSON [info] VMj-3S1tku0: Downloading 1 format(s): 140 [download] /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a has already been downloaded [download] 100% of 134.98MiB | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio |
08ba7283b009-4 | [download] 100% of 134.98MiB [ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a; file is already in target format m4a# Returns a list of Documents, which can be easily viewed or parseddocs[0].page_content[0:500] "Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I w"Building a chat app from YouTube video​Given Documents, we can easily enable chat / question+answering.from langchain.chains import RetrievalQAfrom langchain.vectorstores import FAISSfrom langchain.chat_models import ChatOpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitter# Combine doccombined_docs = [doc.page_content for doc in docs]text = " ".join(combined_docs)# Split themtext_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=150)splits = text_splitter.split_text(text)# Build an indexembeddings = OpenAIEmbeddings()vectordb = FAISS.from_texts(splits, embeddings)# Build a QA chainqa_chain = RetrievalQA.from_chain_type( | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio |
08ba7283b009-5 | embeddings)# Build a QA chainqa_chain = RetrievalQA.from_chain_type( llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0), chain_type="stuff", retriever=vectordb.as_retriever(),)# Ask a question!query = "Why do we need to zero out the gradient before backprop at each step?"qa_chain.run(query) "We need to zero out the gradient before backprop at each step because the backward pass accumulates gradients in the grad attribute of each parameter. If we don't reset the grad to zero before each backward pass, the gradients will accumulate and add up, leading to incorrect updates and slower convergence. By resetting the grad to zero before each backward pass, we ensure that the gradients are calculated correctly and that the optimization process works as intended."query = "What is the difference between an encoder and decoder?"qa_chain.run(query) 'In the context of transformers, an encoder is a component that reads in a sequence of input tokens and generates a sequence of hidden representations. On the other hand, a decoder is a component that takes in a sequence of hidden representations and generates a sequence of output tokens. The main difference between the two is that the encoder is used to encode the input sequence into a fixed-length representation, while the decoder is used to decode the fixed-length representation into an output sequence. In machine translation, for example, the encoder reads in the source language sentence and generates a fixed-length representation, which is then used by the decoder to generate the target language sentence.'query = "For any token, what are x, k, v, and q?"qa_chain.run(query) 'For any token, x is the input vector that contains the private information of that token, k and q are the key and query vectors respectively, which are produced by forwarding linear modules on x, and | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio |
08ba7283b009-6 | q are the key and query vectors respectively, which are produced by forwarding linear modules on x, and v is the vector that is calculated by propagating the same linear module on x again. The key vector represents what the token contains, and the query vector represents what the token is looking for. The vector v is the information that the token will communicate to other tokens if it finds them interesting, and it gets aggregated for the purposes of the self-attention mechanism.' | https://python.langchain.com/docs/integrations/document_loaders/youtube_audio
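Transcription and embedding are the slow, paid steps in this workflow, so it can help to persist the vector index once it is built. The snippet below is a minimal sketch, not part of the original notebook: it reuses the `vectordb` FAISS index built above and relies on FAISS's save_local/load_local helpers; the folder name is arbitrary, and exact behavior may vary with your langchain version.

```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Persist the FAISS index built above (`vectordb`) so later sessions can skip
# re-transcribing and re-embedding the videos. The folder name is arbitrary.
vectordb.save_local("youtube_faiss_index")

# Later (or in another process): reload the index with the same embeddings
# class and rebuild the QA chain on top of it.
reloaded_db = FAISS.load_local("youtube_faiss_index", OpenAIEmbeddings())
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",
    retriever=reloaded_db.as_retriever(),
)
qa_chain.run("What is a bigram language model?")
```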
45b8aed6bb54-0 | College Confidential | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/college_confidential
45b8aed6bb54-1 | College Confidential gives information on 3,800+ colleges and universities. This covers how to load College Confidential webpages into a document format that we can use | https://python.langchain.com/docs/integrations/document_loaders/college_confidential
45b8aed6bb54-2 | colleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloader = CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/brown-university/")data = loader.load()data [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or perform | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-3 | \nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-4 | to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-5 | those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-6 | without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-7 | Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. \n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-8 | Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-9 | 740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-10 | $2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-11 | Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n | https://python.langchain.com/docs/integrations/document_loaders/college_confidential |
45b8aed6bb54-12 | 96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)] | https://python.langchain.com/docs/integrations/document_loaders/college_confidential
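The loader returns one large Document per college page, so a common downstream step is to split it before indexing. The sketch below is illustrative rather than part of the original page: it reuses CollegeConfidentialLoader and RecursiveCharacterTextSplitter from elsewhere in these docs, and the second college URL and the chunk sizes are assumptions that simply follow the same /colleges/<school-name>/ pattern as the Brown example.

```python
from langchain.document_loaders import CollegeConfidentialLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load a couple of college pages (the second URL is hypothetical).
urls = [
    "https://www.collegeconfidential.com/colleges/brown-university/",
    "https://www.collegeconfidential.com/colleges/harvard-university/",
]
docs = []
for url in urls:
    docs.extend(CollegeConfidentialLoader(url).load())

# Split the long pages into smaller chunks for downstream retrieval.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = splitter.split_documents(docs)
print(len(splits))
```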
5dd46529509a-0 | Spreedly | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/spreedly
5dd46529509a-1 | Spreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third | https://python.langchain.com/docs/integrations/document_loaders/spreedly
5dd46529509a-2 | to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements. This notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization. Note: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken.import osfrom langchain.document_loaders import SpreedlyLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Spreedly API requires an access token, which can be found inside the Spreedly Admin Console. This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option which defines what objects you want to load. The following resources are available:gateways_options: Documentationgateways: Documentationreceivers_options: Documentationreceivers: Documentationpayment_methods: Documentationcertificates: Documentationtransactions: Documentationenvironments: Documentationspreedly_loader = SpreedlyLoader(    os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([spreedly_loader])spreedly_doc_retriever = index.vectorstore.as_retriever()    Using embedded DuckDB without persistence: data will be transient# Test the retrieverspreedly_doc_retriever.get_relevant_documents("CRC") | https://python.langchain.com/docs/integrations/document_loaders/spreedly
5dd46529509a-3 | Test the retrieverspreedly_doc_retriever.get_relevant_documents("CRC") [Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
5dd46529509a-4 | Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\n | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
5dd46529509a-5 | KZ\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\n | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
5dd46529509a-6 | VI\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes: | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
5dd46529509a-7 | visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source': | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
5dd46529509a-8 | visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: | https://python.langchain.com/docs/integrations/document_loaders/spreedly |
5dd46529509a-9 | asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})] | https://python.langchain.com/docs/integrations/document_loaders/spreedly
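The same pattern works for any of the other resources listed above; only the resource string changes. The sketch below is an illustrative variation, not part of the original notebook: it points the loader at the "payment_methods" resource (one of the listed options) and queries the index directly instead of using the raw retriever. Whether that resource returns useful text depends on the data in your Spreedly account.

```python
import os

from langchain.document_loaders import SpreedlyLoader
from langchain.indexes import VectorstoreIndexCreator

# Same flow as above, but pointed at the "payment_methods" resource.
payment_methods_loader = SpreedlyLoader(
    os.environ["SPREEDLY_ACCESS_TOKEN"], "payment_methods"
)
index = VectorstoreIndexCreator().from_loaders([payment_methods_loader])

# Ask a question over the indexed documents (uses the default OpenAI LLM).
print(index.query("Which card types are stored for this environment?"))
```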
576a1ad18b8d-0 | Sitemap | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/sitemap
576a1ad18b8d-1 | Extending WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in | https://python.langchain.com/docs/integrations/document_loaders/sitemap
576a1ad18b8d-2 | SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document. The scraping is done concurrently, with a reasonable default limit of 2 requests per second. You can raise this limit if you aren't concerned about being a good citizen, if you control the server being scraped, or if you don't care about load. Note that while this will speed up the scraping process, it may cause the server to block you. Be careful!pip install nest_asyncio    Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6)    [notice] A new release of pip available: 22.3.1 -> 23.0.1    [notice] To update, run: pip install --upgrade pip# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply()from langchain.document_loaders.sitemap import SitemapLoadersitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")docs = sitemap_loader.load()You can change the requests_per_second parameter to increase the max concurrent requests, and use requests_kwargs to pass kwargs when sending requests.sitemap_loader.requests_per_second = 2# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issuesitemap_loader.requests_kwargs = {"verify": False}docs[0] | https://python.langchain.com/docs/integrations/document_loaders/sitemap
576a1ad18b8d-3 | to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nPrompt Templates\nGetting Started\nKey Concepts\nHow-To Guides\nCreate a custom prompt template\nCreate a custom example selector\nProvide few shot examples to a prompt\nPrompt Serialization\nExample Selectors\nOutput Parsers\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nLLMs\nGetting Started\nKey Concepts\nHow-To Guides\nGeneric Functionality\nCustom LLM\nFake LLM\nLLM Caching\nLLM Serialization\nToken Usage Tracking\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nAsync API for LLM\nStreaming with LLMs\n\n\nReference\n\n\nDocument Loaders\nKey Concepts\nHow To Guides\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-4 | Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\n\n\nUtils\nKey Concepts\nGeneric Utilities\nBash\nBing Search\nGoogle Search\nGoogle Serper API\nIFTTT WebHooks\nPython REPL\nRequests\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nReference\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\n\n\nIndexes\nGetting Started\nKey Concepts\nHow To Guides\nEmbeddings\nHypothetical Document Embeddings\nText Splitter\nVectorStores\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\nChatGPT Plugin Retriever\nVectorStore Retriever\nAnalyze Document\nChat Index\nGraph QA\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nGeneric Chains\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\n\n\nUtility Chains\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nAsync API for Chain\n\n\nKey Concepts\nReference\n\n\nAgents\nGetting Started\nKey Concepts\nHow-To Guides\nAgents and Vectorstores\nAsync API for Agent\nConversation Agent (for Chat Models)\nChatGPT Plugins\nCustom Agent\nDefining | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-5 | Agent\nConversation Agent (for Chat Models)\nChatGPT Plugins\nCustom Agent\nDefining Custom Tools\nHuman as a tool\nIntermediate Steps\nLoading from LangChainHub\nMax Iterations\nMulti Input Tools\nSearch Tools\nSerialization\nAdding SharedMemory to an Agent and its Tools\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nReference\n\n\nMemory\nGetting Started\nKey Concepts\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nAdding Memory To an LLMChain\nAdding Memory to a Multi-Input Chain\nAdding Memory to an Agent\nChatGPT Clone\nConversation Agent\nConversational Memory Customization\nCustom Memory\nMultiple Memory\n\n\n\n\nChat\nGetting Started\nKey Concepts\nHow-To Guides\nAgent\nChat Vector DB\nFew Shot Examples\nMemory\nPromptLayer ChatOpenAI\nStreaming\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\n\n\n\n\n\nUse Cases\n\nAgents\nChatbots\nGenerate Examples\nData Augmented Generation\nQuestion Answering\nSummarization\nQuerying Tabular Data\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\nModel Comparison\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-6 | References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLarge language models (LLMs) are emerging as a transformative technology, enabling\ndevelopers to build applications that they previously could not.\nBut using these LLMs in isolation is often not enough to\ncreate a truly powerful app - the real power comes when you are able | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-7 | is often not enough to\ncreate a truly powerful app - the real power comes when you are able to\ncombine them with other sources of computation or knowledge.\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\n� Question Answering over specific documents\n\nDocumentation\nEnd-to-end Example: Question Answering over Notion Database\n\n💬 Chatbots\n\nDocumentation\nEnd-to-end Example: Chat-LangChain\n\n🤖 Agents\n\nDocumentation\nEnd-to-end Example: GPT+WolframAlpha\n\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nIndexes: Language models are often more | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-8 | other tools, and end-to-end chains for common applications.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nAgents: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-9 | Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nDiscord: Join us on our Discord to discuss all things | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-10 | repositories for deploying LangChain apps.\nDiscord: Join us on our Discord to discuss all things LangChain!\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 24, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/stable/', 'loc': 'https://python.langchain.com/en/stable/', 'lastmod': '2023-03-24T19:30:54.647430+00:00', 'changefreq': 'weekly', 'priority': '1'}, lookup_index=0)Filtering sitemap URLs​Sitemaps can be massive files, with thousands of URLs. Often you don't need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the filter_urls parameter. Only URLs that match one of the patterns will be loaded.loader = SitemapLoader(    "https://langchain.readthedocs.io/sitemap.xml", | https://python.langchain.com/docs/integrations/document_loaders/sitemap
576a1ad18b8d-11 | "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"],)documents = loader.load()documents[0] Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nModels\nLLMs\nGetting Started\nGeneric Functionality\nHow to use the async API for LLMs\nHow to write a custom LLM wrapper\nHow (and why) to use the fake LLM\nHow to cache LLM calls\nHow to serialize LLM classes\nHow to stream LLM responses\nHow to track token usage\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nReference\n\n\nChat Models\nGetting Started\nHow-To Guides\nHow to use few shot examples\nHow to stream | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-12 | Models\nGetting Started\nHow-To Guides\nHow to use few shot examples\nHow to stream responses\n\n\nIntegrations\nAzure\nOpenAI\nPromptLayer ChatOpenAI\n\n\n\n\nText Embedding Models\nAzureOpenAI\nCohere\nFake Embeddings\nHugging Face Hub\nInstructEmbeddings\nOpenAI\nSageMaker Endpoint Embeddings\nSelf Hosted Embeddings\nTensorflowHub\n\n\n\n\nPrompts\nPrompt Templates\nGetting Started\nHow-To Guides\nHow to create a custom prompt template\nHow to create a prompt template that uses few shot examples\nHow to work with partial Prompt Templates\nHow to serialize prompts\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nChat Prompt Template\nExample Selectors\nHow to create a custom example selector\nLengthBased ExampleSelector\nMaximal Marginal Relevance ExampleSelector\nNGram Overlap ExampleSelector\nSimilarity ExampleSelector\n\n\nOutput Parsers\nOutput Parsers\nCommaSeparatedListOutputParser\nOutputFixingParser\nPydanticOutputParser\nRetryOutputParser\nStructured Output Parser\n\n\n\n\nIndexes\nGetting Started\nDocument Loaders\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\nText Splitters\nGetting Started\nCharacter Text Splitter\nHuggingFace Length Function\nLatex Text Splitter\nMarkdown Text | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-13 | Text Splitter\nHuggingFace Length Function\nLatex Text Splitter\nMarkdown Text Splitter\nNLTK Text Splitter\nPython Code Text Splitter\nRecursiveCharacterTextSplitter\nSpacy Text Splitter\ntiktoken (OpenAI) Length Function\nTiktokenText Splitter\n\n\nVectorstores\nGetting Started\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\n\n\nRetrievers\nChatGPT Plugin Retriever\nVectorStore Retriever\n\n\n\n\nMemory\nGetting Started\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nHow to add Memory to an LLMChain\nHow to add memory to a Multi-Input Chain\nHow to add Memory to an Agent\nHow to customize conversational memory\nHow to create a custom Memory class\nHow to use multiple memroy classes in the same chain\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nAsync API for Chain\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\nAnalyze Document\nChat Index\nGraph QA\nHypothetical Document Embeddings\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nReference\n\n\nAgents\nGetting Started\nTools\nGetting Started\nDefining Custom Tools\nMulti Input Tools\nBash\nBing | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-14 | Started\nDefining Custom Tools\nMulti Input Tools\nBash\nBing Search\nChatGPT Plugins\nGoogle Search\nGoogle Serper API\nHuman as a tool\nIFTTT WebHooks\nPython REPL\nRequests\nSearch Tools\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nAgents\nAgent Types\nCustom Agent\nConversation Agent (for Chat Models)\nConversation Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nToolkits\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\n\n\nAgent Executors\nHow to combine agents and vectorstores\nHow to use the async API for Agents\nHow to create ChatGPT Clone\nHow to access intermediate steps\nHow to cap the max number of iterations\nHow to add SharedMemory to an Agent and its Tools\n\n\n\n\n\nUse Cases\n\nPersonal Assistants\nQuestion Answering over Docs\nChatbots\nQuerying Tabular Data\nInteracting with APIs\nSummarization\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-15 | Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\n\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\n\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-16 | documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-17 | Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-18 | in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 27, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/latest/', 'loc': 'https://python.langchain.com/en/latest/', 'lastmod': '2023-03-27T22:50:49.790324+00:00', 'changefreq': 'daily', 'priority': '0.9'}, lookup_index=0)Add custom scraping rules​The SitemapLoader uses beautifulsoup4 for the scraping | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
576a1ad18b8d-19 | custom scraping rules​The SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements. The following example shows how to develop and use a custom function to avoid navigation and header elements.Import the beautifulsoup4 library and define the custom function.pip install beautifulsoup4from bs4 import BeautifulSoupdef remove_nav_and_header_elements(content: BeautifulSoup) -> str: # Find all 'nav' and 'header' elements in the BeautifulSoup object nav_elements = content.find_all("nav") header_elements = content.find_all("header") # Remove each 'nav' and 'header' element from the BeautifulSoup object for element in nav_elements + header_elements: element.decompose() return str(content.get_text())Add your custom function to the SitemapLoader object.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"], parsing_function=remove_nav_and_header_elements,)Local Sitemap​The sitemap loader can also be used to load local files.sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)docs = sitemap_loader.load() Fetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00, 3.91it/s]PreviousRSTNextSlackFiltering sitemap URLsAdd custom scraping rulesLocal SitemapCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/document_loaders/sitemap |
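The sitemap rows above already contain the key pieces (a BeautifulSoup `parsing_function`, `filter_urls`, and a local `is_local` load), but flattened. Below is a minimal self-contained sketch that stitches those snippets together; the sitemap URL and filter pattern are the illustrative ones from the page, and the import path is assumed to follow the same `from langchain.document_loaders import ...` pattern used by the other loaders in this corpus.

```python
# Sketch only: combines the custom-parsing and loading snippets from the
# sitemap page above. URLs are illustrative; adjust them for your own site.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

from langchain.document_loaders import SitemapLoader


def remove_nav_and_header_elements(content: BeautifulSoup) -> str:
    # Drop <nav> and <header> markup before extracting the page text.
    for element in content.find_all(["nav", "header"]):
        element.decompose()
    return str(content.get_text())


loader = SitemapLoader(
    "https://langchain.readthedocs.io/sitemap.xml",
    filter_urls=["https://python.langchain.com/en/latest/"],
    parsing_function=remove_nav_and_header_elements,
)
docs = loader.load()

print(f"Loaded {len(docs)} documents")
if docs:
    # Each document carries the sitemap entry's metadata (loc, lastmod, ...).
    print(docs[0].metadata)
```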
eeada402668e-0 | DuckDB | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/duckdb
eeada402668e-1 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDuckDBOn this pageDuckDBDuckDB is an in-process SQL OLAP database management system.Load a DuckDB query with one document per row.#!pip | https://python.langchain.com/docs/integrations/document_loaders/duckdb
eeada402668e-2 | SQL OLAP database management system.Load a DuckDB query with one document per row.#!pip install duckdbfrom langchain.document_loaders import DuckDBLoaderTeam,PayrollNationals,81.34Reds,82.20 Writing example.csvloader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")data = loader.load()print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})]Specifying Which Columns are Content vs Metadata​loader = DuckDBLoader( "SELECT * FROM read_csv_auto('example.csv')", page_content_columns=["Team"], metadata_columns=["Payroll"],)data = loader.load()print(data) [Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]Adding Source to Metadata​loader = DuckDBLoader( "SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')", metadata_columns=["source"],)data = loader.load()print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})]PreviousDocugamiNextEmailSpecifying Which Columns are Content vs MetadataAdding Source to MetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/document_loaders/duckdb |
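For convenience, here is a minimal end-to-end sketch of the DuckDBLoader flow documented in the row above. It recreates the same `example.csv` that the page writes and assumes the `duckdb` package is installed; otherwise it is just the page's own snippets combined into one runnable script.

```python
# Sketch only: end-to-end version of the DuckDBLoader example above.
# Assumes `pip install duckdb langchain` has been run.
from langchain.document_loaders import DuckDBLoader

# Recreate the example.csv written on the page.
with open("example.csv", "w") as f:
    f.write("Team,Payroll\nNationals,81.34\nReds,82.20\n")

loader = DuckDBLoader(
    "SELECT * FROM read_csv_auto('example.csv')",
    page_content_columns=["Team"],   # goes into Document.page_content
    metadata_columns=["Payroll"],    # goes into Document.metadata
)

for doc in loader.load():
    # One Document per query row.
    print(doc.page_content, doc.metadata)
```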