Dataset columns:
- url: string (34–116 chars)
- markdown: string (0–150k chars)
- screenshotUrl: null
- crawl: dict
- metadata: dict
- text: string (0–147k chars)
https://python.langchain.com/docs/integrations/document_loaders/spreedly/
## Spreedly

> [Spreedly](https://docs.spreedly.com/) is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at `Spreedly`, allowing you to independently store a card and then pass that card to different endpoints based on your business requirements.

This notebook covers how to load data from the [Spreedly REST API](https://docs.spreedly.com/reference/api/v1/) into a format that can be ingested into LangChain, along with example usage for vectorization.

Note: this notebook assumes the following packages are installed: `openai`, `chromadb`, and `tiktoken`.

```
import os

from langchain.indexes import VectorstoreIndexCreator
from langchain_community.document_loaders import SpreedlyLoader
```

The Spreedly API requires an access token, which can be found inside the Spreedly Admin Console. This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a `resource` option which defines what objects you want to load.

The following resources are available:

- `gateways_options`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list-supported-gateways)
- `gateways`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list-created-gateways)
- `receivers_options`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list-supported-receivers)
- `receivers`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list-created-receivers)
- `payment_methods`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list)
- `certificates`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list-certificates)
- `transactions`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list49)
- `environments`: [Documentation](https://docs.spreedly.com/reference/api/v1/#list-environments)

```
spreedly_loader = SpreedlyLoader(
    os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options"
)
```

```
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([spreedly_loader])
spreedly_doc_retriever = index.vectorstore.as_retriever()
```

```
Using embedded DuckDB without persistence: data will be transient
```

```
# Test the retriever
spreedly_doc_retriever.get_relevant_documents("CRC")
```

```
[Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 
'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})] ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:39.161Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/spreedly/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/spreedly/", "description": "Spreedly is a service that allows you to", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4402", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"spreedly\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:38 GMT", "etag": "W/\"69b88da362ea41c56be3935d8f82d6e4\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::qqqbm-1713753578204-f99fa7d2d048" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/spreedly/", "property": "og:url" }, { "content": "Spreedly | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Spreedly is a service that allows you to", "property": "og:description" } ], "title": "Spreedly | 🦜️🔗 LangChain" }
Spreedly Spreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements. This notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization. Note: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken. import os from langchain.indexes import VectorstoreIndexCreator from langchain_community.document_loaders import SpreedlyLoader Spreedly API requires an access token, which can be found inside the Spreedly Admin Console. This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option which defines what objects you want to load. Following resources are available: - gateways_options: Documentation - gateways: Documentation - receivers_options: Documentation - receivers: Documentation - payment_methods: Documentation - certificates: Documentation - transactions: Documentation - environments: Documentation spreedly_loader = SpreedlyLoader( os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options" ) # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([spreedly_loader]) spreedly_doc_retriever = index.vectorstore.as_retriever() Using embedded DuckDB without persistence: data will be transient # Test the retriever spreedly_doc_retriever.get_relevant_documents("CRC") [Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), 
Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})]
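To go one step beyond the raw retriever call shown above, the retriever can be plugged into a question-answering chain. A minimal sketch, assuming an OpenAI chat model is available via the `langchain-openai` package and `OPENAI_API_KEY` is set; the chain and model choice here are illustrative additions, not part of the original notebook:

```
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

# Answer questions over the Spreedly gateway documents indexed above.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=spreedly_doc_retriever,
)
qa.invoke({"query": "Which gateways support the discover card type?"})
```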
https://python.langchain.com/docs/integrations/document_loaders/rst/
## RST

> A [reStructuredText (RST)](https://en.wikipedia.org/wiki/ReStructuredText) file is a file format for textual data used primarily in the Python programming language community for technical documentation.

## `UnstructuredRSTLoader`

You can load data from RST files with `UnstructuredRSTLoader` using the following workflow.

```
from langchain_community.document_loaders import UnstructuredRSTLoader
```

```
loader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="elements")
docs = loader.load()
```

```
page_content='Example Docs' metadata={'source': 'example_data/README.rst', 'filename': 'README.rst', 'file_directory': 'example_data', 'filetype': 'text/x-rst', 'page_number': 1, 'category': 'Title'}
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:39.595Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/rst/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/rst/", "description": "A [reStructured Text", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3475", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"rst\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:39 GMT", "etag": "W/\"da2adcb1f0f530ccf4594c0a87413a59\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::hpmlg-1713753579180-2b3d18d92725" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/rst/", "property": "og:url" }, { "content": "RST | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "A [reStructured Text", "property": "og:description" } ], "title": "RST | 🦜️🔗 LangChain" }
RST A reStructured Text (RST) file is a file format for textual data used primarily in the Python programming language community for technical documentation. UnstructuredRSTLoader You can load data from RST files with UnstructuredRSTLoader using the following workflow. from langchain_community.document_loaders import UnstructuredRSTLoader loader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="elements") docs = loader.load() page_content='Example Docs' metadata={'source': 'example_data/README.rst', 'filename': 'README.rst', 'file_directory': 'example_data', 'filetype': 'text/x-rst', 'page_number': 1, 'category': 'Title'}
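For completeness, here is a small sketch of the other common way to use this loader: `mode="single"` returns the whole file as one Document rather than one Document per element. The file path is the same example file used above; the mode values follow the standard Unstructured loader interface.

```
from langchain_community.document_loaders import UnstructuredRSTLoader

# "single" mode returns the entire RST file as one Document instead of one per element
loader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="single")
docs = loader.load()
print(len(docs))
print(docs[0].page_content[:200])
```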
https://python.langchain.com/docs/integrations/document_loaders/surrealdb/
## SurrealDB

> [SurrealDB](https://surrealdb.com/) is an end-to-end cloud-native database designed for modern applications, including web, mobile, serverless, Jamstack, backend, and traditional applications. With SurrealDB, you can simplify your database and API infrastructure, reduce development time, and build secure, performant apps quickly and cost-effectively.
>
> **Key features of SurrealDB include:**
>
> * **Reduces development time:** SurrealDB simplifies your database and API stack by removing the need for most server-side components, allowing you to build secure, performant apps faster and cheaper.
> * **Real-time collaborative API backend service:** SurrealDB functions as both a database and an API backend service, enabling real-time collaboration.
> * **Support for multiple querying languages:** SurrealDB supports SQL querying from client devices, GraphQL, ACID transactions, WebSocket connections, structured and unstructured data, graph querying, full-text indexing, and geospatial querying.
> * **Granular access control:** SurrealDB provides row-level permissions-based access control, giving you the ability to manage data access with precision.
>
> View the [features](https://surrealdb.com/features), the latest [releases](https://surrealdb.com/releases), and [documentation](https://surrealdb.com/docs).

This notebook shows how to use functionality related to the `SurrealDBLoader`.

## Overview

The SurrealDB Document Loader returns a list of LangChain Documents from a SurrealDB database.

The Document Loader takes the following optional parameters:

* `dburl`: connection string to the websocket endpoint. default: `ws://localhost:8000/rpc`
* `ns`: name of the namespace. default: `langchain`
* `db`: name of the database. default: `database`
* `table`: name of the table. default: `documents`
* `db_user`: SurrealDB credentials if needed: db username.
* `db_pass`: SurrealDB credentials if needed: db password.
* `filter_criteria`: dictionary to construct the `WHERE` clause for filtering results from the table.

The output `Document` takes the following shape:

```
Document(
    page_content=<json encoded string containing the result document>,
    metadata={
        'id': <document id>,
        'ns': <namespace name>,
        'db': <database_name>,
        'table': <table name>,
        ... <additional fields from metadata property of the document>
    }
)
```

## Setup

Uncomment the below cells to install surrealdb and langchain.

```
# %pip install --upgrade --quiet surrealdb langchain langchain-community
```

```
# add this import for running in jupyter notebook
import nest_asyncio

nest_asyncio.apply()
```

```
import json

from langchain_community.document_loaders.surrealdb import SurrealDBLoader
```

```
loader = SurrealDBLoader(
    dburl="ws://localhost:8000/rpc",
    ns="langchain",
    db="database",
    table="documents",
    db_user="root",
    db_pass="root",
    filter_criteria={},
)
docs = loader.load()
len(docs)
```

```
doc = docs[-1]
doc.metadata
```

```
{'id': 'documents:zzz434sa584xl3b4ohvk', 'source': '../../modules/state_of_the_union.txt', 'ns': 'langchain', 'db': 'database', 'table': 'documents'}
```

```
page_content = json.loads(doc.page_content)
```

```
'When we use taxpayer dollars to rebuild America – we are going to Buy American: buy American products to support American jobs. \n\nThe federal government spends about $600 Billion a year to keep the country safe and secure. 
\n\nThere’s been a law on the books for almost a century \nto make sure taxpayers’ dollars support American jobs and businesses. \n\nEvery Administration says they’ll do it, but we are actually doing it. \n\nWe will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n\nBut to compete for the best jobs of the future, we also need to level the playing field with China and other competitors. \n\nThat’s why it is so important to pass the Bipartisan Innovation Act sitting in Congress that will make record investments in emerging technologies and American manufacturing. \n\nLet me give you one example of why it’s so important to pass it.' ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:39.720Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/surrealdb/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/surrealdb/", "description": "SurrealDB is an end-to-end cloud-native", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3475", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"surrealdb\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:39 GMT", "etag": "W/\"8523a0e2b9b409180904f25b50333585\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::757mv-1713753579599-2cc6cfce22e8" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/surrealdb/", "property": "og:url" }, { "content": "SurrealDB | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "SurrealDB is an end-to-end cloud-native", "property": "og:description" } ], "title": "SurrealDB | 🦜️🔗 LangChain" }
SurrealDB SurrealDB is an end-to-end cloud-native database designed for modern applications, including web, mobile, serverless, Jamstack, backend, and traditional applications. With SurrealDB, you can simplify your database and API infrastructure, reduce development time, and build secure, performant apps quickly and cost-effectively. Key features of SurrealDB include: Reduces development time: SurrealDB simplifies your database and API stack by removing the need for most server-side components, allowing you to build secure, performant apps faster and cheaper. Real-time collaborative API backend service: SurrealDB functions as both a database and an API backend service, enabling real-time collaboration. Support for multiple querying languages: SurrealDB supports SQL querying from client devices, GraphQL, ACID transactions, WebSocket connections, structured and unstructured data, graph querying, full-text indexing, and geospatial querying. Granular access control: SurrealDB provides row-level permissions-based access control, giving you the ability to manage data access with precision. View the features, the latest releases, and documentation. This notebook shows how to use functionality related to the SurrealDBLoader. Overview​ The SurrealDB Document Loader returns a list of Langchain Documents from a SurrealDB database. The Document Loader takes the following optional parameters: dburl: connection string to the websocket endpoint. default: ws://localhost:8000/rpc ns: name of the namespace. default: langchain db: name of the database. default: database table: name of the table. default: documents db_user: SurrealDB credentials if needed: db username. db_pass: SurrealDB credentails if needed: db password. filter_criteria: dictionary to construct the WHERE clause for filtering results from table. The output Document takes the following shape: Document( page_content=<json encoded string containing the result document>, metadata={ 'id': <document id>, 'ns': <namespace name>, 'db': <database_name>, 'table': <table name>, ... <additional fields from metadata property of the document> } ) Setup​ Uncomment the below cells to install surrealdb and langchain. # %pip install --upgrade --quiet surrealdb langchain langchain-community # add this import for running in jupyter notebook import nest_asyncio nest_asyncio.apply() import json from langchain_community.document_loaders.surrealdb import SurrealDBLoader loader = SurrealDBLoader( dburl="ws://localhost:8000/rpc", ns="langchain", db="database", table="documents", db_user="root", db_pass="root", filter_criteria={}, ) docs = loader.load() len(docs) doc = docs[-1] doc.metadata {'id': 'documents:zzz434sa584xl3b4ohvk', 'source': '../../modules/state_of_the_union.txt', 'ns': 'langchain', 'db': 'database', 'table': 'documents'} page_content = json.loads(doc.page_content) 'When we use taxpayer dollars to rebuild America – we are going to Buy American: buy American products to support American jobs. \n\nThe federal government spends about $600 Billion a year to keep the country safe and secure. \n\nThere’s been a law on the books for almost a century \nto make sure taxpayers’ dollars support American jobs and businesses. \n\nEvery Administration says they’ll do it, but we are actually doing it. \n\nWe will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n\nBut to compete for the best jobs of the future, we also need to level the playing field with China and other competitors. 
\n\nThat’s why it is so important to pass the Bipartisan Innovation Act sitting in Congress that will make record investments in emerging technologies and American manufacturing. \n\nLet me give you one example of why it’s so important to pass it.'
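The walkthrough above loads every row because `filter_criteria` is empty. As a sketch of filtering, assuming the stored documents carry a `source` field (the exact key and the WHERE semantics depend on how your documents were written to SurrealDB), the loader could be constructed like this:

```
from langchain_community.document_loaders.surrealdb import SurrealDBLoader

# Only load rows whose `source` field matches this path (hypothetical field and value).
filtered_loader = SurrealDBLoader(
    dburl="ws://localhost:8000/rpc",
    ns="langchain",
    db="database",
    table="documents",
    db_user="root",
    db_pass="root",
    filter_criteria={"source": "../../modules/state_of_the_union.txt"},
)
print(len(filtered_loader.load()))
```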
https://python.langchain.com/docs/integrations/document_loaders/telegram/
## Telegram

> [Telegram Messenger](https://web.telegram.org/a/) is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.

This notebook covers how to load data from `Telegram` into a format that can be ingested into LangChain.

```
from langchain_community.document_loaders import (
    TelegramChatApiLoader,
    TelegramChatFileLoader,
)
```

```
loader = TelegramChatFileLoader("example_data/telegram.json")
loader.load()
```

```
[Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 🍒 on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})]
```

`TelegramChatApiLoader` loads data directly from any specified chat in Telegram. In order to export the data, you will need to authenticate your Telegram account.

You can get the API_HASH and API_ID from [https://my.telegram.org/auth?to=apps](https://my.telegram.org/auth?to=apps).

`chat_entity` – recommended to be the [entity](https://docs.telethon.dev/en/stable/concepts/entities.html?highlight=Entity#what-is-an-entity) of a channel.

```
loader = TelegramChatApiLoader(
    chat_entity="<CHAT_URL>",  # recommended to use Entity here
    api_hash="<API_HASH>",
    api_id="<API_ID>",
    username="",  # needed only for caching the session.
)
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:40.064Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/telegram/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/telegram/", "description": "Telegram Messenger is a globally", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"telegram\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:39 GMT", "etag": "W/\"d0de89f340be21d07a5682f48c3dfc05\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::f4jgr-1713753579597-14827c345c61" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/telegram/", "property": "og:url" }, { "content": "Telegram | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Telegram Messenger is a globally", "property": "og:description" } ], "title": "Telegram | 🦜️🔗 LangChain" }
Telegram Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features. This notebook covers how to load data from Telegram into a format that can be ingested into LangChain. from langchain_community.document_loaders import ( TelegramChatApiLoader, TelegramChatFileLoader, ) loader = TelegramChatFileLoader("example_data/telegram.json") [Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})] TelegramChatApiLoader loads data directly from any specified chat from Telegram. In order to export the data, you will need to authenticate your Telegram account. You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps chat_entity – recommended to be the entity of a channel. loader = TelegramChatApiLoader( chat_entity="<CHAT_URL>", # recommended to use Entity here api_hash="<API HASH >", api_id="<API_ID>", username="", # needed only for caching the session. )
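Once a chat export is loaded, it is usually split into chunks before indexing. A minimal sketch, assuming the `langchain-text-splitters` package is installed; the chunk sizes are arbitrary and not part of the original notebook:

```
from langchain_community.document_loaders import TelegramChatFileLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the exported chat and split it into chunks suitable for embedding.
docs = TelegramChatFileLoader("example_data/telegram.json").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)
print(len(chunks))
```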
https://python.langchain.com/docs/integrations/document_loaders/tensorflow_datasets/
## TensorFlow Datasets

> [TensorFlow Datasets](https://www.tensorflow.org/datasets) is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as [tf.data.Datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), enabling easy-to-use and high-performance input pipelines. To get started see the [guide](https://www.tensorflow.org/datasets/overview) and the [list of datasets](https://www.tensorflow.org/datasets/catalog/overview#all_datasets).

This notebook shows how to load `TensorFlow Datasets` into a Document format that we can use downstream.

## Installation

You need to install the `tensorflow` and `tensorflow-datasets` python packages.

```
%pip install --upgrade --quiet tensorflow
```

```
%pip install --upgrade --quiet tensorflow-datasets
```

## Example

As an example, we use the [`mlqa/en` dataset](https://www.tensorflow.org/datasets/catalog/mlqa#mlqaen).

> `MLQA` (`Multilingual Question Answering Dataset`) is a benchmark dataset for evaluating multilingual question answering performance. The dataset consists of 7 languages: Arabic, German, Spanish, English, Hindi, Vietnamese, Chinese.
>
> * Homepage: [https://github.com/facebookresearch/MLQA](https://github.com/facebookresearch/MLQA)
> * Source code: `tfds.datasets.mlqa.Builder`
> * Download size: 72.21 MiB

```
# Feature structure of `mlqa/en` dataset:

FeaturesDict(
    {
        "answers": Sequence(
            {
                "answer_start": int32,
                "text": Text(shape=(), dtype=string),
            }
        ),
        "context": Text(shape=(), dtype=string),
        "id": string,
        "question": Text(shape=(), dtype=string),
        "title": Text(shape=(), dtype=string),
    }
)
```

```
import tensorflow as tf
import tensorflow_datasets as tfds
```

```
# try to access this dataset directly:
ds = tfds.load("mlqa/en", split="test")
ds = ds.take(1)  # Only take a single example
ds
```

```
<_TakeDataset element_spec={'answers': {'answer_start': TensorSpec(shape=(None,), dtype=tf.int32, name=None), 'text': TensorSpec(shape=(None,), dtype=tf.string, name=None)}, 'context': TensorSpec(shape=(), dtype=tf.string, name=None), 'id': TensorSpec(shape=(), dtype=tf.string, name=None), 'question': TensorSpec(shape=(), dtype=tf.string, name=None), 'title': TensorSpec(shape=(), dtype=tf.string, name=None)}>
```

Now we have to create a custom function to convert a dataset sample into a Document. This is a requirement: there is no standard format for TF datasets, so we need a custom transformation function. Let’s use the `context` field as the `Document.page_content` and place the other fields in the `Document.metadata`.

```
from langchain_core.documents import Document


def decode_to_str(item: tf.Tensor) -> str:
    return item.numpy().decode("utf-8")


def mlqaen_example_to_document(example: dict) -> Document:
    return Document(
        page_content=decode_to_str(example["context"]),
        metadata={
            "id": decode_to_str(example["id"]),
            "title": decode_to_str(example["title"]),
            "question": decode_to_str(example["question"]),
            "answer": decode_to_str(example["answers"]["text"][0]),
        },
    )


for example in ds:
    doc = mlqaen_example_to_document(example)
    print(doc)
    break
```

```
page_content='After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a "whistle salute" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.' metadata={'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'}
```

```
2023-08-03 14:27:08.482983: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
```

```
from langchain_community.document_loaders import TensorflowDatasetLoader
from langchain_core.documents import Document

loader = TensorflowDatasetLoader(
    dataset_name="mlqa/en",
    split_name="test",
    load_max_docs=3,
    sample_to_document_function=mlqaen_example_to_document,
)
```

`TensorflowDatasetLoader` has these parameters:

- `dataset_name`: the name of the dataset to load
- `split_name`: the name of the split to load. Defaults to “train”.
- `load_max_docs`: a limit to the number of loaded documents. Defaults to 100.
- `sample_to_document_function`: a function that converts a dataset sample to a Document

```
docs = loader.load()
len(docs)
```

```
2023-08-03 14:27:22.998964: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
```

```
'After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a "whistle salute" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.'
```

```
{'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'}
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:40.411Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/tensorflow_datasets/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/tensorflow_datasets/", "description": "TensorFlow Datasets is a", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3475", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"tensorflow_datasets\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:40 GMT", "etag": "W/\"65229dcbd03f3088c441f1b9590c6d09\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::tl469-1713753580077-884a3c8ff0ba" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/tensorflow_datasets/", "property": "og:url" }, { "content": "TensorFlow Datasets | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "TensorFlow Datasets is a", "property": "og:description" } ], "title": "TensorFlow Datasets | 🦜️🔗 LangChain" }
TensorFlow Datasets TensorFlow Datasets is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets, enabling easy-to-use and high-performance input pipelines. To get started see the guide and the list of datasets. This notebook shows how to load TensorFlow Datasets into a Document format that we can use downstream. Installation​ You need to install tensorflow and tensorflow-datasets python packages. %pip install --upgrade --quiet tensorflow %pip install --upgrade --quiet tensorflow-datasets Example​ As an example, we use the mlqa/en dataset. MLQA (Multilingual Question Answering Dataset) is a benchmark dataset for evaluating multilingual question answering performance. The dataset consists of 7 languages: Arabic, German, Spanish, English, Hindi, Vietnamese, Chinese. Homepage: https://github.com/facebookresearch/MLQA Source code: tfds.datasets.mlqa.Builder Download size: 72.21 MiB # Feature structure of `mlqa/en` dataset: FeaturesDict( { "answers": Sequence( { "answer_start": int32, "text": Text(shape=(), dtype=string), } ), "context": Text(shape=(), dtype=string), "id": string, "question": Text(shape=(), dtype=string), "title": Text(shape=(), dtype=string), } ) import tensorflow as tf import tensorflow_datasets as tfds # try directly access this dataset: ds = tfds.load("mlqa/en", split="test") ds = ds.take(1) # Only take a single example ds <_TakeDataset element_spec={'answers': {'answer_start': TensorSpec(shape=(None,), dtype=tf.int32, name=None), 'text': TensorSpec(shape=(None,), dtype=tf.string, name=None)}, 'context': TensorSpec(shape=(), dtype=tf.string, name=None), 'id': TensorSpec(shape=(), dtype=tf.string, name=None), 'question': TensorSpec(shape=(), dtype=tf.string, name=None), 'title': TensorSpec(shape=(), dtype=tf.string, name=None)}> Now we have to create a custom function to convert dataset sample into a Document. This is a requirement. There is no standard format for the TF datasets that’s why we need to make a custom transformation function. Let’s use context field as the Document.page_content and place other fields in the Document.metadata. from langchain_core.documents import Document def decode_to_str(item: tf.Tensor) -> str: return item.numpy().decode("utf-8") def mlqaen_example_to_document(example: dict) -> Document: return Document( page_content=decode_to_str(example["context"]), metadata={ "id": decode_to_str(example["id"]), "title": decode_to_str(example["title"]), "question": decode_to_str(example["question"]), "answer": decode_to_str(example["answers"]["text"][0]), }, ) for example in ds: doc = mlqaen_example_to_document(example) print(doc) break page_content='After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a "whistle salute" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. 
Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.' metadata={'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'} 2023-08-03 14:27:08.482983: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. from langchain_community.document_loaders import TensorflowDatasetLoader from langchain_core.documents import Document loader = TensorflowDatasetLoader( dataset_name="mlqa/en", split_name="test", load_max_docs=3, sample_to_document_function=mlqaen_example_to_document, ) TensorflowDatasetLoader has these parameters: - dataset_name: the name of the dataset to load - split_name: the name of the split to load. Defaults to “train”. - load_max_docs: a limit to the number of loaded documents. Defaults to 100. - sample_to_document_function: a function that converts a dataset sample to a Document docs = loader.load() len(docs) 2023-08-03 14:27:22.998964: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. 'After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a "whistle salute" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.' 
{'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'}
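To make the tail of this walkthrough explicit: the two outputs above come from loading the dataset and then inspecting the first Document. A minimal sketch, assuming the `loader` defined earlier:

```
docs = loader.load()
print(len(docs))  # capped at 3 by load_max_docs

first = docs[0]
print(first.metadata["question"])
print(first.page_content[:200])
```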
https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file/
## Tencent COS File

> [Tencent Cloud Object Storage (COS)](https://www.tencentcloud.com/products/cos) is a distributed storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols. `COS` has no restrictions on data structure or format. It also has no bucket size limit and partition management, making it suitable for virtually any use case, such as data delivery, data processing, and data lakes. `COS` provides a web-based console, multi-language SDKs and APIs, command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly access community tools and plugins.

This covers how to load a document object from a `Tencent COS File`.

```
from qcloud_cos import CosConfig

from langchain_community.document_loaders import TencentCOSFileLoader

conf = CosConfig(
    Region="your cos region",
    SecretId="your cos secret_id",
    SecretKey="your cos secret_key",
)
loader = TencentCOSFileLoader(conf=conf, bucket="your_cos_bucket", key="fake.docx")
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:40.814Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file/", "description": "[Tencent Cloud Object Storage", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"tencent_cos_file\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:40 GMT", "etag": "W/\"ab04ecc7e8ceea10f35cd4617e81c1f1\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::jv828-1713753580175-7266a051bac1" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file/", "property": "og:url" }, { "content": "Tencent COS File | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "[Tencent Cloud Object Storage", "property": "og:description" } ], "title": "Tencent COS File | 🦜️🔗 LangChain" }
Tencent Cloud Object Storage (COS) is a distributed storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols. COS has no restrictions on data structure or format. It also has no bucket size limit and partition management, making it suitable for virtually any use case, such as data delivery, data processing, and data lakes. COS provides a web-based console, multi-language SDKs and APIs, command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly access community tools and plugins. This covers how to load document object from a Tencent COS File. conf = CosConfig( Region="your cos region", SecretId="your cos secret_id", SecretKey="your cos secret_key", ) loader = TencentCOSFileLoader(conf=conf, bucket="you_cos_bucket", key="fake.docx")
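The snippet above only constructs the loader. A minimal sketch of actually loading the object, assuming the `cos-python-sdk-v5` package (which provides `qcloud_cos`) is installed and the bucket and key exist:

```
# %pip install --upgrade --quiet cos-python-sdk-v5

docs = loader.load()
print(docs[0].metadata)
print(docs[0].page_content[:200])
```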
https://python.langchain.com/docs/integrations/document_loaders/tidb/
## TiDB

> [TiDB Cloud](https://tidbcloud.com/) is a comprehensive Database-as-a-Service (DBaaS) solution that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Be among the first to experience it by joining the waitlist for the private beta at [https://tidb.cloud/ai](https://tidb.cloud/ai).

This notebook introduces how to use `TiDBLoader` to load data from TiDB in LangChain.

Then, we will configure the connection to TiDB. In this notebook, we will follow the standard connection method provided by TiDB Cloud to establish a secure and efficient database connection.

```
import getpass

# copy from tidb cloud console, replace it with your own
tidb_connection_string_template = "mysql+pymysql://<USER>:<PASSWORD>@<HOST>:4000/<DB>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true"
tidb_password = getpass.getpass("Input your TiDB password:")
tidb_connection_string = tidb_connection_string_template.replace(
    "<PASSWORD>", tidb_password
)
```

Here’s a breakdown of some key arguments you can use to customize the behavior of the `TiDBLoader`:

- `connection_string`: the SQLAlchemy-style connection string used to reach the TiDB database.
- `query`: the SQL query whose result rows are turned into documents.
- `page_content_columns`: the columns written into `Document.page_content`.
- `metadata_columns`: the columns written into `Document.metadata`.

```
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

# Connect to the database
engine = create_engine(tidb_connection_string)
metadata = MetaData()
table_name = "test_tidb_loader"

# Create a table
test_table = Table(
    table_name,
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(255)),
    Column("description", String(255)),
)
metadata.create_all(engine)


with engine.connect() as connection:
    transaction = connection.begin()
    try:
        connection.execute(
            test_table.insert(),
            [
                {"name": "Item 1", "description": "Description of Item 1"},
                {"name": "Item 2", "description": "Description of Item 2"},
                {"name": "Item 3", "description": "Description of Item 3"},
            ],
        )
        transaction.commit()
    except:
        transaction.rollback()
        raise
```

```
from langchain_community.document_loaders import TiDBLoader

# Setup TiDBLoader to retrieve data
loader = TiDBLoader(
    connection_string=tidb_connection_string,
    query=f"SELECT * FROM {table_name};",
    page_content_columns=["name", "description"],
    metadata_columns=["id"],
)

# Load data
documents = loader.load()

# Display the loaded documents
for doc in documents:
    print("-" * 30)
    print(f"content: {doc.page_content}\nmetadata: {doc.metadata}")
```

```
------------------------------
content: name: Item 1
description: Description of Item 1
metadata: {'id': 1}
------------------------------
content: name: Item 2
description: Description of Item 2
metadata: {'id': 2}
------------------------------
content: name: Item 3
description: Description of Item 3
metadata: {'id': 3}
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:40.915Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/tidb/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/tidb/", "description": "TiDB Cloud, is a comprehensive", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4402", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"tidb\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:40 GMT", "etag": "W/\"7a2d30a4e95717c9cf14d4749d0a240a\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::fmtb9-1713753580192-d13f964388db" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/tidb/", "property": "og:url" }, { "content": "TiDB | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "TiDB Cloud, is a comprehensive", "property": "og:description" } ], "title": "TiDB | 🦜️🔗 LangChain" }
TiDB Cloud, is a comprehensive Database-as-a-Service (DBaaS) solution, that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Be among the first to experience it by joining the waitlist for the private beta at https://tidb.cloud/ai. This notebook introduces how to use TiDBLoader to load data from TiDB in langchain. Then, we will configure the connection to a TiDB. In this notebook, we will follow the standard connection method provided by TiDB Cloud to establish a secure and efficient database connection. import getpass # copy from tidb cloud console,replace it with your own tidb_connection_string_template = "mysql+pymysql://<USER>:<PASSWORD>@<HOST>:4000/<DB>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" tidb_password = getpass.getpass("Input your TiDB password:") tidb_connection_string = tidb_connection_string_template.replace( "<PASSWORD>", tidb_password ) Here’s a breakdown of some key arguments you can use to customize the behavior of the TiDBLoader: from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine # Connect to the database engine = create_engine(tidb_connection_string) metadata = MetaData() table_name = "test_tidb_loader" # Create a table test_table = Table( table_name, metadata, Column("id", Integer, primary_key=True), Column("name", String(255)), Column("description", String(255)), ) metadata.create_all(engine) with engine.connect() as connection: transaction = connection.begin() try: connection.execute( test_table.insert(), [ {"name": "Item 1", "description": "Description of Item 1"}, {"name": "Item 2", "description": "Description of Item 2"}, {"name": "Item 3", "description": "Description of Item 3"}, ], ) transaction.commit() except: transaction.rollback() raise from langchain_community.document_loaders import TiDBLoader # Setup TiDBLoader to retrieve data loader = TiDBLoader( connection_string=tidb_connection_string, query=f"SELECT * FROM {table_name};", page_content_columns=["name", "description"], metadata_columns=["id"], ) # Load data documents = loader.load() # Display the loaded documents for doc in documents: print("-" * 30) print(f"content: {doc.page_content}\nmetada: {doc.metadata}") ------------------------------ content: name: Item 1 description: Description of Item 1 metada: {'id': 1} ------------------------------ content: name: Item 2 description: Description of Item 2 metada: {'id': 2} ------------------------------ content: name: Item 3 description: Description of Item 3 metada: {'id': 3}
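Two optional follow-ups to the walkthrough above, shown as a sketch: loading only a subset of rows with a custom `query` (the WHERE clause is illustrative), and dropping the demo table afterwards.

```
# Load only part of the table with a custom query (hypothetical filter).
subset_loader = TiDBLoader(
    connection_string=tidb_connection_string,
    query=f"SELECT * FROM {table_name} WHERE id >= 2;",
    page_content_columns=["name"],
    metadata_columns=["id", "description"],
)
print(len(subset_loader.load()))

# Clean up the demo table created earlier.
test_table.drop(engine)
```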
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown/
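This entry demonstrates the 2Markdown loader, which sends a URL to the [2markdown](https://2markdown.com/) service and returns the page converted to Markdown. The converted LangChain introduction page is reproduced below; a minimal sketch of the kind of call that produces such output is shown here. Treat the target URL and the API key placeholder as assumptions rather than values taken from the original page.

```
from langchain_community.document_loaders import ToMarkdownLoader

# Hypothetical API key placeholder; 2markdown issues real keys from its dashboard.
api_key = "<YOUR_2MARKDOWN_API_KEY>"

loader = ToMarkdownLoader(
    url="https://python.langchain.com/docs/get_started/introduction",
    api_key=api_key,
)
docs = loader.load()

# The page comes back as a single Document whose page_content is Markdown.
print(docs[0].page_content[:500])
```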
**LangChain** is a framework for developing applications powered by language models. It enables applications that: - **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.) - **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.) This framework consists of several parts. - **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents. - **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks. - **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API. - **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain. ![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://python.langchain.com/assets/images/langchain_stack-f21828069f74484521f38199910007c1.svg) Together, these products simplify the entire application lifecycle: - **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference. - **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence. - **Deploy**: Turn any chain into an API with LangServe. ## LangChain Libraries [​](\#langchain-libraries "Direct link to LangChain Libraries") The main value props of the LangChain packages are: 1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not 2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones. The LangChain libraries themselves are made up of several different packages. - **`langchain-core`**: Base abstractions and LangChain Expression Language. - **`langchain-community`**: Third party integrations. - **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture. ## Get started [​](\#get-started "Direct link to Get started") [Here’s](/docs/get_started/installation) how to install LangChain, set up your environment, and start building. We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application. Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain. note These docs focus on the Python LangChain library. [Head here](https://js.langchain.com) for docs on the JavaScript LangChain library. ## LangChain Expression Language (LCEL) [​](\#langchain-expression-language-lcel "Direct link to LangChain Expression Language (LCEL)") LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains. 
- **[Overview](/docs/expression_language/)**: LCEL and its benefits - **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects - **[How-to](/docs/expression_language/how_to)**: Key features of LCEL - **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks ## Modules [​](\#modules "Direct link to Modules") LangChain provides standard, extendable interfaces and integrations for the following modules: #### [Model I/O](/docs/modules/model_io/) [​](\#model-io "Direct link to model-io") Interface with language models #### [Retrieval](/docs/modules/data_connection/) [​](\#retrieval "Direct link to retrieval") Interface with application-specific data #### [Agents](/docs/modules/agents/) [​](\#agents "Direct link to agents") Let models choose which tools to use given high-level directives ## Examples, ecosystem, and resources [​](\#examples-ecosystem-and-resources "Direct link to Examples, ecosystem, and resources") ### [Use cases](/docs/use_cases/question_answering/) [​](\#use-cases "Direct link to use-cases") Walkthroughs and techniques for common end-to-end use cases, like: - [Document question answering](/docs/use_cases/question_answering/) - [Chatbots](/docs/use_cases/chatbots/) - [Analyzing structured data](/docs/use_cases/sql/) - and much more... ### [Integrations](/docs/integrations/providers/) [​](\#integrations "Direct link to integrations") LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/). ### [Guides](/docs/guides/debugging) [​](\#guides "Direct link to guides") Best practices for developing with LangChain. ### [API reference](https://api.python.langchain.com) [​](\#api-reference "Direct link to api-reference") Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages. ### [Developer's guide](/docs/contributing) [​](\#developers-guide "Direct link to developers-guide") Check out the developer's guide for guidelines on contributing and help getting your dev environment set up. Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLM’s.
https://python.langchain.com/docs/integrations/document_transformers/
## Document transformers

- [AI21SemanticTextSplitter](https://python.langchain.com/docs/integrations/document_transformers/ai21_semantic_text_splitter/): This example goes over how to use AI21SemanticTextSplitter in LangChain.
- [Beautiful Soup](https://python.langchain.com/docs/integrations/document_transformers/beautiful_soup/): Beautiful Soup is a
- [Cross Encoder Reranker](https://python.langchain.com/docs/integrations/document_transformers/cross_encoder_reranker/): This notebook shows how to implement reranker in a retriever with your
- [Doctran: extract properties](https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties/): We can extract useful features of documents using the
- [Doctran: interrogate documents](https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document/): Documents used in a vector store knowledge base are typically stored in
- [Doctran: language translation](https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document/): Comparing documents through embeddings has the benefit of working across
- [Google Cloud Document AI](https://python.langchain.com/docs/integrations/document_transformers/google_docai/): Document AI is a document understanding platform from Google Cloud to
- [Google Translate](https://python.langchain.com/docs/integrations/document_transformers/google_translate/): Google Translate is a multilingual
- [HTML to text](https://python.langchain.com/docs/integrations/document_transformers/html2text/): html2text is a Python package
- [Nuclia](https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer/): Nuclia automatically indexes your unstructured
- [OpenAI metadata tagger](https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger/): It can often be useful to tag ingested documents with structured
- [OpenVINO Reranker](https://python.langchain.com/docs/integrations/document_transformers/openvino_rerank/): OpenVINO™ is an
- [VoyageAI Reranker](https://python.langchain.com/docs/integrations/document_transformers/voyageai-reranker/): Voyage AI provides cutting-edge
https://python.langchain.com/docs/integrations/document_loaders/toml/
[TOML](https://en.wikipedia.org/wiki/TOML) is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. `TOML` is implemented in many programming languages. The name `TOML` is an acronym for “Tom’s Obvious, Minimal Language” referring to its creator, Tom Preston-Werner. If you need to load `Toml` files, use the `TomlLoader`. ``` [Document(page_content='{"internal": {"creation_date": "2023-05-01", "updated_date": "2022-05-01", "release": ["release_type"], "min_endpoint_version": "some_semantic_version", "os_list": ["operating_system_list"]}, "rule": {"uuid": "some_uuid", "name": "Fake Rule Name", "description": "Fake description of rule", "query": "process where process.name : \\"somequery\\"\\n", "threat": [{"framework": "MITRE ATT&CK", "tactic": {"name": "Execution", "id": "TA0002", "reference": "https://attack.mitre.org/tactics/TA0002/"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})] ```
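The fenced output above is what `TomlLoader` returns for a small example rule file; the loading call itself is not shown on this page. A minimal sketch is below, with the file path taken from the `source` metadata in the output above.

```
from langchain_community.document_loaders import TomlLoader

# Path comes from the metadata in the example output above.
loader = TomlLoader("example_data/fake_rule.toml")
documents = loader.load()

print(documents[0].page_content)
print(documents[0].metadata)
```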
https://python.langchain.com/docs/integrations/document_transformers/beautiful_soup/
## Beautiful Soup

> [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) is a Python package for parsing HTML and XML documents (including those with malformed markup, i.e. non-closed tags, so named after tag soup). It creates a parse tree for parsed pages that can be used to extract data from HTML, which is useful for web scraping.

`Beautiful Soup` offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning.

It’s suited for cases where you want to extract specific information and clean up the HTML content according to your needs.

For example, we can scrape text content within `<p>, <li>, <div>, and <a>` tags from the HTML content:

* `<p>`: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.

* `<li>`: The list item tag. It is used within ordered (`<ol>`) and unordered (`<ul>`) lists to define individual items within the list.

* `<div>`: The division tag. It is a block-level element used to group other inline or block-level elements.

* `<a>`: The anchor tag. It is used to define hyperlinks.

```
from langchain_community.document_loaders import AsyncChromiumLoader
from langchain_community.document_transformers import BeautifulSoupTransformer

# Load HTML
loader = AsyncChromiumLoader(["https://www.wsj.com"])
html = loader.load()
```

```
# Transform
bs_transformer = BeautifulSoupTransformer()
docs_transformed = bs_transformer.transform_documents(
    html, tags_to_extract=["p", "li", "div", "a"]
)
```

```
docs_transformed[0].page_content[0:500]
```

```
'Conservative legal activists are challenging Amazon, Comcast and others using many of the same tools that helped kill affirmative-action programs in colleges.1,2099 min read U.S. stock indexes fell and government-bond prices climbed, after Moody’s lowered credit ratings for 10 smaller U.S. banks and said it was reviewing ratings for six larger ones. The Dow industrials dropped more than 150 points.3 min read Penn Entertainment’s Barstool Sportsbook app will be rebranded as ESPN Bet this fall as '
```
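Beyond choosing which tags to extract, the transformer can also drop tags you never want in the output. The sketch below assumes the `unwanted_tags` and `remove_lines` keyword arguments of `transform_documents` (check the exact names against your installed version) and reuses the `html` documents loaded above.

```
from langchain_community.document_transformers import BeautifulSoupTransformer

bs_transformer = BeautifulSoupTransformer()

# Drop script/style blocks entirely, extract only paragraphs and links,
# and collapse blank lines in the result (argument names assumed; see note above).
docs_cleaned = bs_transformer.transform_documents(
    html,
    unwanted_tags=["script", "style"],
    tags_to_extract=["p", "a"],
    remove_lines=True,
)
print(docs_cleaned[0].page_content[0:200])
```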
https://python.langchain.com/docs/integrations/document_transformers/ai21_semantic_text_splitter/
## AI21SemanticTextSplitter This example goes over how to use AI21SemanticTextSplitter in LangChain. ## Installation[​](#installation "Direct link to Installation") ``` pip install langchain-ai21 ``` ## Environment Setup[​](#environment-setup "Direct link to Environment Setup") We’ll need to get a AI21 API key and set the AI21\_API\_KEY environment variable: ``` import osfrom getpass import getpassos.environ["AI21_API_KEY"] = getpass() ``` ## Example Usages[​](#example-usages "Direct link to Example Usages") ### Splitting text by semantic meaning[​](#splitting-text-by-semantic-meaning "Direct link to Splitting text by semantic meaning") This example shows how to use AI21SemanticTextSplitter to split a text into chunks based on semantic meaning. ``` from langchain_ai21 import AI21SemanticTextSplitterTEXT = ( "We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, " "legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\n" "Imagine a company that employs hundreds of thousands of employees. In today's information " "overload age, nearly 30% of the workday is spent dealing with documents. There's no surprise " "here, given that some of these documents are long and convoluted on purpose (did you know that " "reading through all your privacy policies would take almost a quarter of a year?). Aside from " "inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of " "Employees Read Their Employment Contracts Entirely Before Signing!).\nThis is where AI-driven summarization " "tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, " "users can (ideally) quickly extract relevant information from a text. With large language models, " "the development of those tools is easier than ever, and you can offer your users a summary that is " "specifically tailored to their preferences.\nLarge language models naturally follow patterns in input " "(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed " 'them with several examples in the input ("few-shot prompt"), so they can follow through. ' "The process of creating the correct prompt for your problem is called prompt engineering, " "and you can read more about it here.")semantic_text_splitter = AI21SemanticTextSplitter()chunks = semantic_text_splitter.split_text(TEXT)print(f"The text has been split into {len(chunks)} chunks.")for chunk in chunks: print(chunk) print("====") ``` ### Splitting text by semantic meaning with merge[​](#splitting-text-by-semantic-meaning-with-merge "Direct link to Splitting text by semantic meaning with merge") This example shows how to use AI21SemanticTextSplitter to split a text into chunks based on semantic meaning, then merging the chunks based on `chunk_size`. ``` from langchain_ai21 import AI21SemanticTextSplitterTEXT = ( "We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, " "legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\n" "Imagine a company that employs hundreds of thousands of employees. In today's information " "overload age, nearly 30% of the workday is spent dealing with documents. There's no surprise " "here, given that some of these documents are long and convoluted on purpose (did you know that " "reading through all your privacy policies would take almost a quarter of a year?). 
Aside from " "inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of " "Employees Read Their Employment Contracts Entirely Before Signing!).\nThis is where AI-driven summarization " "tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, " "users can (ideally) quickly extract relevant information from a text. With large language models, " "the development of those tools is easier than ever, and you can offer your users a summary that is " "specifically tailored to their preferences.\nLarge language models naturally follow patterns in input " "(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed " 'them with several examples in the input ("few-shot prompt"), so they can follow through. ' "The process of creating the correct prompt for your problem is called prompt engineering, " "and you can read more about it here.")semantic_text_splitter_chunks = AI21SemanticTextSplitter(chunk_size=1000)chunks = semantic_text_splitter_chunks.split_text(TEXT)print(f"The text has been split into {len(chunks)} chunks.")for chunk in chunks: print(chunk) print("====") ``` ### Splitting text to documents[​](#splitting-text-to-documents "Direct link to Splitting text to documents") This example shows how to use AI21SemanticTextSplitter to split a text into Documents based on semantic meaning. The metadata will contain a type for each document. ``` from langchain_ai21 import AI21SemanticTextSplitterTEXT = ( "We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, " "legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\n" "Imagine a company that employs hundreds of thousands of employees. In today's information " "overload age, nearly 30% of the workday is spent dealing with documents. There's no surprise " "here, given that some of these documents are long and convoluted on purpose (did you know that " "reading through all your privacy policies would take almost a quarter of a year?). Aside from " "inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of " "Employees Read Their Employment Contracts Entirely Before Signing!).\nThis is where AI-driven summarization " "tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, " "users can (ideally) quickly extract relevant information from a text. With large language models, " "the development of those tools is easier than ever, and you can offer your users a summary that is " "specifically tailored to their preferences.\nLarge language models naturally follow patterns in input " "(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed " 'them with several examples in the input ("few-shot prompt"), so they can follow through. 
' "The process of creating the correct prompt for your problem is called prompt engineering, " "and you can read more about it here.")semantic_text_splitter = AI21SemanticTextSplitter()documents = semantic_text_splitter.split_text_to_documents(TEXT)print(f"The text has been split into {len(documents)} Documents.")for doc in documents: print(f"type: {doc.metadata['source_type']}") print(f"text: {doc.page_content}") print("====") ``` ### Creating Documents with Metadata[​](#creating-documents-with-metadata "Direct link to Creating Documents with Metadata") This example shows how to use AI21SemanticTextSplitter to create Documents from texts, and adding custom Metadata to each Document. ``` from langchain_ai21 import AI21SemanticTextSplitterTEXT = ( "We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, " "legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\n" "Imagine a company that employs hundreds of thousands of employees. In today's information " "overload age, nearly 30% of the workday is spent dealing with documents. There's no surprise " "here, given that some of these documents are long and convoluted on purpose (did you know that " "reading through all your privacy policies would take almost a quarter of a year?). Aside from " "inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of " "Employees Read Their Employment Contracts Entirely Before Signing!).\nThis is where AI-driven summarization " "tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, " "users can (ideally) quickly extract relevant information from a text. With large language models, " "the development of those tools is easier than ever, and you can offer your users a summary that is " "specifically tailored to their preferences.\nLarge language models naturally follow patterns in input " "(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed " 'them with several examples in the input ("few-shot prompt"), so they can follow through. ' "The process of creating the correct prompt for your problem is called prompt engineering, " "and you can read more about it here.")semantic_text_splitter = AI21SemanticTextSplitter()texts = [TEXT]documents = semantic_text_splitter.create_documents( texts=texts, metadatas=[{"pikachu": "pika pika"}])print(f"The text has been split into {len(documents)} Documents.")for doc in documents: print(f"metadata: {doc.metadata}") print(f"text: {doc.page_content}") print("====") ``` ### Splitting text to documents with start index[​](#splitting-text-to-documents-with-start-index "Direct link to Splitting text to documents with start index") This example shows how to use AI21SemanticTextSplitter to split a text into Documents based on semantic meaning. The metadata will contain a start index for each document. **Note** that the start index provides an indication of the order of the chunks rather than the actual start index for each chunk. ``` from langchain_ai21 import AI21SemanticTextSplitterTEXT = ( "We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, " "legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\n" "Imagine a company that employs hundreds of thousands of employees. In today's information " "overload age, nearly 30% of the workday is spent dealing with documents. 
There's no surprise " "here, given that some of these documents are long and convoluted on purpose (did you know that " "reading through all your privacy policies would take almost a quarter of a year?). Aside from " "inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of " "Employees Read Their Employment Contracts Entirely Before Signing!).\nThis is where AI-driven summarization " "tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, " "users can (ideally) quickly extract relevant information from a text. With large language models, " "the development of those tools is easier than ever, and you can offer your users a summary that is " "specifically tailored to their preferences.\nLarge language models naturally follow patterns in input " "(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed " 'them with several examples in the input ("few-shot prompt"), so they can follow through. ' "The process of creating the correct prompt for your problem is called prompt engineering, " "and you can read more about it here.")semantic_text_splitter = AI21SemanticTextSplitter(add_start_index=True)documents = semantic_text_splitter.create_documents(texts=[TEXT])print(f"The text has been split into {len(documents)} Documents.")for doc in documents: print(f"start_index: {doc.metadata['start_index']}") print(f"text: {doc.page_content}") print("====") ``` ### Splitting documents[​](#splitting-documents "Direct link to Splitting documents") This example shows how to use AI21SemanticTextSplitter to split a list of Documents into chunks based on semantic meaning. ``` from langchain_ai21 import AI21SemanticTextSplitterfrom langchain_core.documents import DocumentTEXT = ( "We’ve all experienced reading long, tedious, and boring pieces of text - financial reports, " "legal documents, or terms and conditions (though, who actually reads those terms and conditions to be honest?).\n" "Imagine a company that employs hundreds of thousands of employees. In today's information " "overload age, nearly 30% of the workday is spent dealing with documents. There's no surprise " "here, given that some of these documents are long and convoluted on purpose (did you know that " "reading through all your privacy policies would take almost a quarter of a year?). Aside from " "inefficiency, workers may simply refrain from reading some documents (for example, Only 16% of " "Employees Read Their Employment Contracts Entirely Before Signing!).\nThis is where AI-driven summarization " "tools can be helpful: instead of reading entire documents, which is tedious and time-consuming, " "users can (ideally) quickly extract relevant information from a text. With large language models, " "the development of those tools is easier than ever, and you can offer your users a summary that is " "specifically tailored to their preferences.\nLarge language models naturally follow patterns in input " "(prompt), and provide coherent completion that follows the same patterns. For that, we want to feed " 'them with several examples in the input ("few-shot prompt"), so they can follow through. 
' "The process of creating the correct prompt for your problem is called prompt engineering, " "and you can read more about it here.")semantic_text_splitter = AI21SemanticTextSplitter()document = Document(page_content=TEXT, metadata={"hello": "goodbye"})documents = semantic_text_splitter.split_documents([document])print(f"The document list has been split into {len(documents)} Documents.")for doc in documents: print(f"text: {doc.page_content}") print(f"metadata: {doc.metadata}") print("====") ``` * * * #### Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/document_loaders/trello/
## Trello

> [Trello](https://www.atlassian.com/software/trello) is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a “board” where users can create lists and cards to represent their tasks and activities.

The `TrelloLoader` allows you to load cards from a Trello board and is implemented on top of [py-trello](https://pypi.org/project/py-trello/). It currently supports `api_key`/`token` authentication only.

1. Credentials generation: [https://trello.com/power-ups/admin/](https://trello.com/power-ups/admin/)

2. Click the manual token generation link to get the token.

To specify the API key and token, you can either set the environment variables `TRELLO_API_KEY` and `TRELLO_TOKEN`, or pass `api_key` and `token` directly into the `from_credentials` convenience constructor.

This loader lets you provide a board name and pulls the corresponding cards into Document objects. Note that the board “name” is also called “title” in the official documentation: [https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/](https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/)

You can also specify several load parameters to include or exclude different fields, both in the document page\_content and in its metadata.

## Features[​](#features "Direct link to Features")

* Load cards from a Trello board.

* Filter cards based on their status (open or closed).

* Include card names, comments, and checklists in the loaded documents.

* Customize which additional metadata fields to include in the document.

By default, all card fields are included in the full-text page\_content and in the metadata accordingly.
```
%pip install --upgrade --quiet py-trello beautifulsoup4 lxml
```

```
# If you have already set the API key and token using environment variables,
# you can skip this cell and comment out the `api_key` and `token` named arguments
# in the initialization steps below.
from getpass import getpass

API_KEY = getpass()
TOKEN = getpass()
```

```
from langchain_community.document_loaders import TrelloLoader

# Get the open cards from "Awesome Board"
loader = TrelloLoader.from_credentials(
    "Awesome Board",
    api_key=API_KEY,
    token=TOKEN,
    card_filter="open",
)
documents = loader.load()

print(documents[0].page_content)
print(documents[0].metadata)
```

```
Review Tech partner pages
Comments:
{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}
```

```
# Get all the cards from "Awesome Board" but only include the
# card list (column) as extra metadata.
loader = TrelloLoader.from_credentials(
    "Awesome Board",
    api_key=API_KEY,
    token=TOKEN,
    extra_metadata=("list",),
)
documents = loader.load()

print(documents[0].page_content)
print(documents[0].metadata)
```

```
Review Tech partner pages
Comments:
{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}
```

```
# Get the cards from "Another Board" and exclude the card name,
# checklist and comments from the Document page_content text.
loader = TrelloLoader.from_credentials(
    "test",
    api_key=API_KEY,
    token=TOKEN,
    include_card_name=False,
    include_checklist=False,
    include_comments=False,
)
documents = loader.load()

print("Document: " + documents[0].page_content)
print(documents[0].metadata)
```
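As noted above, `from_credentials` can also pick up credentials from the `TRELLO_API_KEY` and `TRELLO_TOKEN` environment variables. A minimal sketch, assuming those variables are already set and the board exists; the `card_filter="closed"` value follows the open/closed filtering described in the features list:

```python
import os

from langchain_community.document_loaders import TrelloLoader

# Assumes TRELLO_API_KEY and TRELLO_TOKEN are exported, so no api_key/token
# arguments are passed here.
assert "TRELLO_API_KEY" in os.environ and "TRELLO_TOKEN" in os.environ

loader = TrelloLoader.from_credentials("Awesome Board", card_filter="closed")
closed_docs = loader.load()
print(f"Loaded {len(closed_docs)} closed cards.")
```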
https://python.langchain.com/docs/integrations/document_loaders/tsv/
You can also load the table using the `UnstructuredTSVLoader`. One advantage of using `UnstructuredTSVLoader` is that if you use it in `"elements"` mode, an HTML representation of the table will be available in the metadata.

```
<table border="1" class="dataframe">
  <tbody>
    <tr> <td>Nationals, 81.34, 98</td> </tr>
    <tr> <td>Reds, 82.20, 97</td> </tr>
    <tr> <td>Yankees, 197.96, 95</td> </tr>
    <tr> <td>Giants, 117.62, 94</td> </tr>
    <tr> <td>Braves, 83.31, 94</td> </tr>
    <tr> <td>Athletics, 55.37, 94</td> </tr>
    <tr> <td>Rangers, 120.51, 93</td> </tr>
    <tr> <td>Orioles, 81.43, 93</td> </tr>
    <tr> <td>Rays, 64.17, 90</td> </tr>
    <tr> <td>Angels, 154.49, 89</td> </tr>
    <tr> <td>Tigers, 132.30, 88</td> </tr>
    <tr> <td>Cardinals, 110.30, 88</td> </tr>
    <tr> <td>Dodgers, 95.14, 86</td> </tr>
    <tr> <td>White Sox, 96.92, 85</td> </tr>
    <tr> <td>Brewers, 97.65, 83</td> </tr>
    <tr> <td>Phillies, 174.54, 81</td> </tr>
    <tr> <td>Diamondbacks, 74.28, 81</td> </tr>
    <tr> <td>Pirates, 63.43, 79</td> </tr>
    <tr> <td>Padres, 55.24, 76</td> </tr>
    <tr> <td>Mariners, 81.97, 75</td> </tr>
    <tr> <td>Mets, 93.35, 74</td> </tr>
    <tr> <td>Blue Jays, 75.48, 73</td> </tr>
    <tr> <td>Royals, 60.91, 72</td> </tr>
    <tr> <td>Marlins, 118.07, 69</td> </tr>
    <tr> <td>Red Sox, 173.18, 69</td> </tr>
    <tr> <td>Indians, 78.43, 68</td> </tr>
    <tr> <td>Twins, 94.08, 66</td> </tr>
    <tr> <td>Rockies, 78.06, 64</td> </tr>
    <tr> <td>Cubs, 88.19, 61</td> </tr>
    <tr> <td>Astros, 60.65, 55</td> </tr>
  </tbody>
</table>
```
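The loader call that produces an HTML rendering like the one above is not included in this excerpt. A minimal sketch, assuming a tab-delimited file at `./example_data/mlb_teams_2012.csv` (the path is illustrative):

```python
from langchain_community.document_loaders.tsv import UnstructuredTSVLoader

# In "elements" mode each table element keeps an HTML rendering of the table
# in its metadata (under the "text_as_html" key).
loader = UnstructuredTSVLoader(
    file_path="./example_data/mlb_teams_2012.csv", mode="elements"
)
docs = loader.load()

print(docs[0].metadata["text_as_html"])
```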
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties/
## Doctran: extract properties

We can extract useful features of documents using the [Doctran](https://github.com/psychic-api/doctran) library, which uses OpenAI’s function calling feature to extract specific metadata.

Extracting metadata from documents is helpful for a variety of tasks, including:

* **Classification:** classifying documents into different categories
* **Data mining:** extracting structured data that can be used for data analysis
* **Style transfer:** changing the way text is written to more closely match expected user input, improving vector search results

```
%pip install --upgrade --quiet doctran
```

```
import json

from langchain_community.document_transformers import DoctranPropertyExtractor
from langchain_core.documents import Document
```

```
from dotenv import load_dotenv

load_dotenv()
```

## Input[​](#input "Direct link to Input")

This is the document we’ll extract properties from.

```
sample_text = """[Generated with ChatGPT]

Confidential Document - For Internal Use Only

Date: July 1, 2023

Subject: Updates and Discussions on Various Topics

Dear Team,

I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.

Security and Privacy Measures
As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.

HR Updates and Employee Benefits
Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).

Marketing Initiatives and Campaigns
Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.

Research and Development Projects
In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.

Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.

Thank you for your attention, and let's continue to work together to achieve our goals.

Best regards,

Jason Fan
Cofounder & CEO
Psychic
jason@psychic.dev
"""

print(sample_text)
```

```
[Generated with ChatGPT]

Confidential Document - For Internal Use Only

Date: July 1, 2023

Subject: Updates and Discussions on Various Topics

Dear Team,

I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.

Security and Privacy Measures
As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.

HR Updates and Employee Benefits
Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).

Marketing Initiatives and Campaigns
Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.

Research and Development Projects
In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.

Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.

Thank you for your attention, and let's continue to work together to achieve our goals.

Best regards,

Jason Fan
Cofounder & CEO
Psychic
jason@psychic.dev
```

```
documents = [Document(page_content=sample_text)]

properties = [
    {
        "name": "category",
        "description": "What type of email this is.",
        "type": "string",
        "enum": ["update", "action_item", "customer_feedback", "announcement", "other"],
        "required": True,
    },
    {
        "name": "mentions",
        "description": "A list of all people mentioned in this email.",
        "type": "array",
        "items": {
            "name": "full_name",
            "description": "The full name of the person mentioned.",
            "type": "string",
        },
        "required": True,
    },
    {
        "name": "eli5",
        "description": "Explain this email to me like I'm 5 years old.",
        "type": "string",
        "required": True,
    },
]

property_extractor = DoctranPropertyExtractor(properties=properties)
```

## Output[​](#output "Direct link to Output")

After extracting properties from a document, the result is returned as a new document with the extracted properties provided in the metadata.

```
extracted_document = property_extractor.transform_documents(
    documents, properties=properties
)
```

```
print(json.dumps(extracted_document[0].metadata, indent=2))
```

```
{
  "extracted_properties": {
    "category": "update",
    "mentions": [
      "John Doe",
      "Jane Smith",
      "Michael Johnson",
      "Sarah Thompson",
      "David Rodriguez"
    ],
    "eli5": "This email provides important updates and discussions on various topics. It mentions the implementation of security and privacy measures, HR updates and employee benefits, marketing initiatives and campaigns, and research and development projects. It recognizes the contributions of John Doe, Jane Smith, Michael Johnson, Sarah Thompson, and David Rodriguez. It also reminds everyone to adhere to data protection policies, enroll in the employee benefits program, attend the upcoming product launch event, and share ideas for new projects during the R&D brainstorming session."
  }
}
```
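The extracted values sit under the `extracted_properties` key of the new document's metadata, so they can be read back with plain dictionary access. A small sketch, reusing the `extracted_document` produced above:

```python
# The transformer nests its results under "extracted_properties" in the metadata.
props = extracted_document[0].metadata["extracted_properties"]

category = props["category"]  # e.g. "update"
mentions = props["mentions"]  # list of full names found in the email
summary = props["eli5"]       # plain-language summary

print(f"category={category}, people mentioned={len(mentions)}")
print(summary)
```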
https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_directory/
## Tencent COS Directory

> [Tencent Cloud Object Storage (COS)](https://www.tencentcloud.com/products/cos) is a distributed storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols. `COS` has no restrictions on data structure or format. It also has no bucket size limit and requires no partition management, making it suitable for virtually any use case, such as data delivery, data processing, and data lakes. `COS` provides a web-based console, multi-language SDKs and APIs, a command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly access community tools and plugins.

This covers how to load document objects from a `Tencent COS Directory`.

```
%pip install --upgrade --quiet cos-python-sdk-v5
```

```
from langchain_community.document_loaders import TencentCOSDirectoryLoader
from qcloud_cos import CosConfig
```

```
conf = CosConfig(
    Region="your cos region",
    SecretId="your cos secret_id",
    SecretKey="your cos secret_key",
)
loader = TencentCOSDirectoryLoader(conf=conf, bucket="your_cos_bucket")
```

## Specifying a prefix[​](#specifying-a-prefix "Direct link to Specifying a prefix")

You can also specify a prefix for more fine-grained control over what files to load.

```
loader = TencentCOSDirectoryLoader(conf=conf, bucket="your_cos_bucket", prefix="fake")
```
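A minimal sketch of actually loading the documents, assuming the configuration above points at a real bucket with valid credentials:

```python
# Load every object under the configured bucket (and optional prefix)
# into LangChain Document objects.
docs = loader.load()

print(f"Loaded {len(docs)} documents")
if docs:
    print(docs[0].metadata)
```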
https://python.langchain.com/docs/integrations/document_transformers/cross_encoder_reranker/
## Cross Encoder Reranker

This notebook shows how to implement a reranker in a retriever with your own cross encoder from [Hugging Face cross encoder models](https://huggingface.co/cross-encoder) or Hugging Face models that implement the cross encoder function ([example: BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base)). `SagemakerEndpointCrossEncoder` enables you to use these Hugging Face models loaded on SageMaker.

This builds on top of ideas in the [ContextualCompressionRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/). The overall structure of this document came from the [Cohere Reranker documentation](https://python.langchain.com/docs/integrations/retrievers/cohere-reranker/).

For more about why a cross encoder can be used as a reranking mechanism in conjunction with embeddings for better retrieval, refer to the [Hugging Face Cross-Encoders documentation](https://www.sbert.net/examples/applications/cross-encoder/README.html).

```
#!pip install faiss sentence_transformers
# OR (depending on Python version)
#!pip install faiss-cpu sentence_transformers
```

```
# Helper function for printing docs
def pretty_print_docs(docs):
    print(
        f"\n{'-' * 100}\n".join(
            [f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]
        )
    )
```

## Set up the base vector store retriever[​](#set-up-the-base-vector-store-retriever "Direct link to Set up the base vector store retriever")

Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.

```
from langchain.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

documents = TextLoader("../../modules/state_of_the_union.txt").load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
embeddingsModel = HuggingFaceEmbeddings(
    model_name="sentence-transformers/msmarco-distilbert-dot-v5"
)
retriever = FAISS.from_documents(texts, embeddingsModel).as_retriever(
    search_kwargs={"k": 20}
)
query = "What is the plan for the economy?"
docs = retriever.get_relevant_documents(query)
pretty_print_docs(docs)
```

## Doing reranking with CrossEncoderReranker[​](#doing-reranking-with-crossencoderreranker "Direct link to Doing reranking with CrossEncoderReranker")

Now let’s wrap our base retriever with a `ContextualCompressionRetriever`. `CrossEncoderReranker` uses `HuggingFaceCrossEncoder` to rerank the returned results.

```
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CrossEncoderReranker
from langchain_community.cross_encoders import HuggingFaceCrossEncoder

model = HuggingFaceCrossEncoder(model_name="BAAI/bge-reranker-base")
compressor = CrossEncoderReranker(model=model, top_n=3)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "What is the plan for the economy?"
)
pretty_print_docs(compressed_docs)
```

```
Document 1:

More infrastructure and innovation in America. More goods moving faster and cheaper in America. More jobs where you can earn a good living in America. And instead of relying on foreign supply chains, let’s make it in America.

Economists call it “increasing the productive capacity of our economy.” I call it building a better America.

My plan to fight inflation will lower your costs and lower the deficit.
----------------------------------------------------------------------------------------------------
Document 2:

Second – cut energy costs for families an average of $500 a year by combatting climate change.

Let’s provide investments and tax credits to weatherize your homes and businesses to be energy efficient and you get a tax credit; double America’s clean energy production in solar, wind, and so much more; lower the price of electric vehicles, saving you another $80 a month because you’ll never have to pay at the gas pump again.
----------------------------------------------------------------------------------------------------
Document 3:

Look at cars. Last year, there weren’t enough semiconductors to make all the cars that people wanted to buy. And guess what, prices of automobiles went up. So—we have a choice. One way to fight inflation is to drive down wages and make Americans poorer. I have a better plan to fight inflation. Lower your costs, not your wages. Make more cars and semiconductors in America. More infrastructure and innovation in America. More goods moving faster and cheaper in America.
```

## Uploading Hugging Face model to SageMaker endpoint[​](#uploading-hugging-face-model-to-sagemaker-endpoint "Direct link to Uploading Hugging Face model to SageMaker endpoint")

Here is a sample `inference.py` for creating an endpoint that works with `SagemakerEndpointCrossEncoder`. For more details with step-by-step guidance, refer to [this article](https://huggingface.co/blog/kchoe/deploy-any-huggingface-model-to-sagemaker). It downloads the Hugging Face model on the fly, so you do not need to keep model artifacts such as `pytorch_model.bin` in your `model.tar.gz`.

```
import json
import logging
from typing import List

import torch
from sagemaker_inference import encoder
from transformers import AutoModelForSequenceClassification, AutoTokenizer

PAIRS = "pairs"
SCORES = "scores"


class CrossEncoder:
    def __init__(self) -> None:
        self.device = (
            torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
        )
        logging.info(f"Using device: {self.device}")
        model_name = "BAAI/bge-reranker-base"
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_name)
        self.model = self.model.to(self.device)

    def __call__(self, pairs: List[List[str]]) -> List[float]:
        with torch.inference_mode():
            inputs = self.tokenizer(
                pairs,
                padding=True,
                truncation=True,
                return_tensors="pt",
                max_length=512,
            )
            inputs = inputs.to(self.device)
            scores = (
                self.model(**inputs, return_dict=True)
                .logits.view(
                    -1,
                )
                .float()
            )
        return scores.detach().cpu().tolist()


def model_fn(model_dir: str) -> CrossEncoder:
    try:
        return CrossEncoder()
    except Exception:
        logging.exception(f"Failed to load model from: {model_dir}")
        raise


def transform_fn(
    cross_encoder: CrossEncoder, input_data: bytes, content_type: str, accept: str
) -> bytes:
    payload = json.loads(input_data)
    model_output = cross_encoder(**payload)
    output = {SCORES: model_output}
    return encoder.encode(output, accept)
```
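The `transform_fn` above expects a JSON body with a `pairs` key and returns scores under `scores`. A minimal sketch of invoking such an endpoint directly with `boto3`, assuming it has already been deployed; the endpoint name is a placeholder:

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Query/passage pairs in the format transform_fn expects under the "pairs" key.
payload = {
    "pairs": [
        ["What is the plan for the economy?", "My plan to fight inflation will lower your costs."],
        ["What is the plan for the economy?", "The weather was nice last weekend."],
    ]
}

response = runtime.invoke_endpoint(
    EndpointName="bge-reranker-base-endpoint",  # placeholder name
    ContentType="application/json",
    Accept="application/json",
    Body=json.dumps(payload),
)
scores = json.loads(response["Body"].read())["scores"]
print(scores)  # one relevance score per pair; higher means more relevant
```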
https://python.langchain.com/docs/integrations/document_loaders/twitter/
This loader fetches the text from the Tweets of a list of `Twitter` users, using the `tweepy` Python package. You must initialize the loader with your `Twitter API` token, and you need to pass in the Twitter username you want to extract. ``` [Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 
'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 
2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat 🧐', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 
'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@TRHLofficial What’s he talking about and why is it sponsored by Erik’s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}})] ```
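The documents above are the output of LangChain's Twitter tweet loader, which wraps the `tweepy` package and needs a Twitter API token plus the usernames to pull tweets from. For reference only, a minimal sketch of producing documents like these might look as follows; the bearer token, usernames, and tweet count are placeholders, not values from the original notebook.

```python
from langchain_community.document_loaders import TwitterTweetLoader

# Placeholder credentials and usernames -- substitute your own values.
loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",
    twitter_users=["elonmusk"],
    number_tweets=50,  # tweets to pull per user
)

documents = loader.load()
# Each Document carries the tweet text plus a rich `user_info` metadata dict,
# as seen in the output above.
print(documents[0].page_content)
print(documents[0].metadata["user_info"]["screen_name"])
```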
https://python.langchain.com/docs/integrations/document_loaders/url/
## URL This example covers how to load `HTML` documents from a list of `URLs` into the `Document` format that we can use downstream. ## Unstructured URL Loader You have to install the `unstructured` library: ``` !pip install -U unstructured ``` ``` from langchain_community.document_loaders import UnstructuredURLLoader ``` ``` urls = [ "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023", "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023",] ``` Pass in `ssl_verify=False` together with a `headers` dictionary if you need to get past SSL verification errors. ``` loader = UnstructuredURLLoader(urls=urls) ``` ## Selenium URL Loader This covers how to load HTML documents from a list of URLs using the `SeleniumURLLoader`. Using `Selenium` allows us to load pages that require JavaScript to render. To use the `SeleniumURLLoader`, you have to install `selenium` and `unstructured`. ``` !pip install -U selenium unstructured ``` ``` from langchain_community.document_loaders import SeleniumURLLoader ``` ``` urls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",] ``` ``` loader = SeleniumURLLoader(urls=urls) ``` ## Playwright URL Loader This covers how to load HTML documents from a list of URLs using the `PlaywrightURLLoader`. [Playwright](https://playwright.dev/) enables reliable end-to-end testing for modern web apps. As with Selenium, `Playwright` allows us to load and render pages that require JavaScript. To use the `PlaywrightURLLoader`, you have to install `playwright` and `unstructured`. Additionally, you have to install the Playwright Chromium browser (for example with `playwright install`): ``` !pip install -U playwright unstructured ``` ``` from langchain_community.document_loaders import PlaywrightURLLoader ``` ``` urls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",] ``` ``` loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"]) ```
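The page above stops at constructing each loader. As a rough sketch of fetching the documents end-to-end (the `headers` dict is illustrative, and `ssl_verify=False` is only needed when you actually hit certificate errors), the Unstructured loader could be used like this; the Selenium and Playwright loaders expose the same `.load()` call.

```python
from langchain_community.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
]

# Illustrative headers; combine with ssl_verify=False if SSL verification fails.
headers = {"User-Agent": "Mozilla/5.0"}

loader = UnstructuredURLLoader(urls=urls, ssl_verify=False, headers=headers)
docs = loader.load()  # one Document per URL in the default mode

print(len(docs))
print(docs[0].metadata["source"])
```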
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document/
Comparing documents through embeddings has the benefit of working across multiple languages. “Harrison says hello” and “Harrison dice hola” will occupy similar positions in the vector space because they have the same meaning semantically. However, it can still be useful to use an LLM to **translate documents into other languages** before vectorizing them. This is especially helpful when users are expected to query the knowledge base in different languages, or when state-of-the-art embedding models are not available for a given language. We can accomplish this using the [Doctran](https://github.com/psychic-api/doctran) library, which uses OpenAI’s function calling feature to translate documents between languages. ``` sample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.HR Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & CEOPsychicjason@psychic.dev""" ``` After translating a document, the result will be returned as a new document with the page\_content translated into the target language ``` Documento Confidencial - Solo para Uso InternoFecha: 1 de Julio de 2023Asunto: Actualizaciones y Discusiones sobre Varios TemasEstimado Equipo,Espero que este correo electrónico les encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial.Medidas de Seguridad y PrivacidadComo parte de nuestro compromiso continuo de garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas sólidas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: john.doe@example.com) del departamento de TI por su diligente trabajo en mejorar nuestra seguridad de red. En el futuro, recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y pautas de protección de datos. Además, si encuentran algún riesgo o incidente de seguridad potencial, por favor, repórtelo de inmediato a nuestro equipo dedicado en security@example.com.Actualizaciones de Recursos Humanos y Beneficios para EmpleadosRecientemente, dimos la bienvenida a varios nuevos miembros del equipo que han realizado contribuciones significativas en sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su destacado desempeño en servicio al cliente. Jane ha recibido consistentemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o necesitan ayuda, por favor, contacten a nuestro representante de Recursos Humanos, Michael Johnson (teléfono: 418-492-3850, correo electrónico: michael.johnson@example.com).Iniciativas y Campañas de MarketingNuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar el conocimiento de nuestra marca y fomentar la participación de los clientes. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus esfuerzos excepcionales en la gestión de nuestras plataformas de redes sociales. Sarah ha logrado aumentar nuestra base de seguidores en un 20% solo en el último mes. Además, marquen sus calendarios para el próximo evento de lanzamiento de productos el 15 de Julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa.Proyectos de Investigación y DesarrolloEn nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el trabajo excepcional de David Rodriguez (correo electrónico: david.rodriguez@example.com) en su papel de líder de proyecto. 
Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, nos gustaría recordar a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión mensual de lluvia de ideas de I+D, programada para el 10 de Julio.Por favor, traten la información de este documento con la máxima confidencialidad y asegúrense de no compartirla con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, por favor, no duden en comunicarse directamente conmigo.Gracias por su atención y sigamos trabajando juntos para alcanzar nuestros objetivos.Atentamente,Jason FanCofundador y CEOPsychicjason@psychic.dev ```
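The excerpt above shows the source document and the translated result but omits the transformer call in between. Assuming the Doctran translate transformer from `langchain_community` (the `language` argument and the OpenAI API key requirement are the assumptions here), a sketch that would produce output like the Spanish text above is:

```python
from langchain_community.document_transformers import DoctranTextTranslator
from langchain_core.documents import Document

# Wrap the confidential memo from above in a Document.
documents = [Document(page_content=sample_text)]

# Translate into Spanish; Doctran calls OpenAI under the hood,
# so OPENAI_API_KEY must be set in the environment.
translator = DoctranTextTranslator(language="spanish")
translated_documents = translator.transform_documents(documents)

print(translated_documents[0].page_content)
```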
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document/
## Doctran: interrogate documents Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we **convert documents into Q&A format** before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. We can accomplish this using the [Doctran](https://github.com/psychic-api/doctran) library, which uses OpenAI’s function calling feature to “interrogate” documents. See [this notebook](https://github.com/psychic-api/doctran/blob/main/benchmark.ipynb) for benchmarks on vector similarity scores for various queries based on raw documents versus interrogated documents. ``` %pip install --upgrade --quiet doctran ``` ``` import jsonfrom langchain_community.document_transformers import DoctranQATransformerfrom langchain_core.documents import Document ``` ``` from dotenv import load_dotenvload_dotenv() ``` ## Input[​](#input "Direct link to Input") This is the document we’ll interrogate ``` sample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.HR Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. 
David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & CEOPsychicjason@psychic.dev"""print(sample_text) ``` ``` [Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.HR Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. 
If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & CEOPsychicjason@psychic.dev ``` ``` documents = [Document(page_content=sample_text)]qa_transformer = DoctranQATransformer()transformed_document = qa_transformer.transform_documents(documents) ``` ## Output[​](#output "Direct link to Output") After interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata. ``` transformed_document = qa_transformer.transform_documents(documents)print(json.dumps(transformed_document[0].metadata, indent=2)) ``` ``` { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The purpose of this document is to provide important updates and discuss various topics that require the team's attention." }, { "question": "What should be done if someone comes across potential security risks or incidents?", "answer": "If someone comes across potential security risks or incidents, they should report them immediately to the dedicated team at security@example.com." }, { "question": "Who is commended for enhancing network security?", "answer": "John Doe from the IT department is commended for enhancing network security." }, { "question": "Who should be contacted for assistance with employee benefits?", "answer": "For assistance with employee benefits, HR representative Michael Johnson should be contacted. His phone number is 418-492-3850, and his email is michael.johnson@example.com." }, { "question": "Who has made significant contributions to their respective departments?", "answer": "Several new team members have made significant contributions to their respective departments." }, { "question": "Who is recognized for outstanding performance in customer service?", "answer": "Jane Smith is recognized for outstanding performance in customer service." }, { "question": "Who has successfully increased the follower base on social media?", "answer": "Sarah Thompson has successfully increased the follower base on social media." }, { "question": "When is the upcoming product launch event?", "answer": "The upcoming product launch event is on July 15th." }, { "question": "Who is acknowledged for their exceptional work as project lead?", "answer": "David Rodriguez is acknowledged for his exceptional work as project lead." }, { "question": "When is the monthly R&D brainstorming session scheduled?", "answer": "The monthly R&D brainstorming session is scheduled for July 10th." } ]} ```
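Since the point of interrogation is to make retrieval work better on question-style queries, one possible next step, not part of the original notebook and therefore only a hedged sketch, is to explode the `questions_and_answers` metadata into one `Document` per question before indexing:

```python
from langchain_core.documents import Document

qa_docs = []
for doc in transformed_document:
    for qa in doc.metadata["questions_and_answers"]:
        # One retrievable Document per question; keep the answer in metadata.
        qa_docs.append(
            Document(page_content=qa["question"], metadata={"answer": qa["answer"]})
        )

print(len(qa_docs))
print(qa_docs[0].page_content, "->", qa_docs[0].metadata["answer"])
```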
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:45.280Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document/", "description": "Documents used in a vector store knowledge base are typically stored in", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4406", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"doctran_interrogate_document\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:45 GMT", "etag": "W/\"4849e6fdc030cb58d7aca69e4b025766\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::klsh9-1713753585015-791003af35e6" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document/", "property": "og:url" }, { "content": "Doctran: interrogate documents | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Documents used in a vector store knowledge base are typically stored in", "property": "og:description" } ], "title": "Doctran: interrogate documents | 🦜️🔗 LangChain" }
Doctran: interrogate documents Documents used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents. We can accomplish this using the Doctran library, which uses OpenAI’s function calling feature to “interrogate” documents. See this notebook for benchmarks on vector similarity scores for various queries based on raw documents versus interrogated documents. %pip install --upgrade --quiet doctran import json from langchain_community.document_transformers import DoctranQATransformer from langchain_core.documents import Document from dotenv import load_dotenv load_dotenv() Input​ This is the document we’ll interrogate sample_text = """[Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic jason@psychic.dev """ print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. 
If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic jason@psychic.dev documents = [Document(page_content=sample_text)] qa_transformer = DoctranQATransformer() transformed_document = qa_transformer.transform_documents(documents) Output​ After interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata. transformed_document = qa_transformer.transform_documents(documents) print(json.dumps(transformed_document[0].metadata, indent=2)) { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The purpose of this document is to provide important updates and discuss various topics that require the team's attention." }, { "question": "What should be done if someone comes across potential security risks or incidents?", "answer": "If someone comes across potential security risks or incidents, they should report them immediately to the dedicated team at security@example.com." }, { "question": "Who is commended for enhancing network security?", "answer": "John Doe from the IT department is commended for enhancing network security." }, { "question": "Who should be contacted for assistance with employee benefits?", "answer": "For assistance with employee benefits, HR representative Michael Johnson should be contacted. His phone number is 418-492-3850, and his email is michael.johnson@example.com." }, { "question": "Who has made significant contributions to their respective departments?", "answer": "Several new team members have made significant contributions to their respective departments." }, { "question": "Who is recognized for outstanding performance in customer service?", "answer": "Jane Smith is recognized for outstanding performance in customer service." }, { "question": "Who has successfully increased the follower base on social media?", "answer": "Sarah Thompson has successfully increased the follower base on social media." }, { "question": "When is the upcoming product launch event?", "answer": "The upcoming product launch event is on July 15th." }, { "question": "Who is acknowledged for their exceptional work as project lead?", "answer": "David Rodriguez is acknowledged for his exceptional work as project lead." }, { "question": "When is the monthly R&D brainstorming session scheduled?", "answer": "The monthly R&D brainstorming session is scheduled for July 10th." } ] }
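Since the question/answer pairs land in `metadata["questions_and_answers"]`, a natural next step is to flatten them into standalone documents before embedding, so each question can be matched against user queries directly. The following is a minimal sketch, assuming the `transformed_document` from above; the flattening strategy itself is just one option, not part of the Doctran API.

```
from langchain_core.documents import Document

qa_docs = []
for qa in transformed_document[0].metadata["questions_and_answers"]:
    # One Document per Q&A pair; keep the question in metadata so retrieved
    # chunks can be traced back to the query they answer.
    qa_docs.append(
        Document(
            page_content=f"Q: {qa['question']}\nA: {qa['answer']}",
            metadata={"question": qa["question"]},
        )
    )

print(len(qa_docs))
print(qa_docs[0].page_content)
```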
https://python.langchain.com/docs/integrations/document_transformers/google_docai/
## Google Cloud Document AI

Document AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume.

Learn more:

* [Document AI overview](https://cloud.google.com/document-ai/docs/overview)
* [Document AI videos and labs](https://cloud.google.com/document-ai/docs/videos)
* [Try it!](https://cloud.google.com/document-ai/docs/drag-and-drop)

The module contains a `PDF` parser based on DocAI from Google Cloud. You need to install two libraries to use this parser:

```
%pip install --upgrade --quiet google-cloud-documentai
%pip install --upgrade --quiet google-cloud-documentai-toolbox
```

First, you need to set up a Google Cloud Storage (GCS) bucket and create your own Optical Character Recognition (OCR) processor as described here: [https://cloud.google.com/document-ai/docs/create-processor](https://cloud.google.com/document-ai/docs/create-processor)

The `GCS_OUTPUT_PATH` should be a path to a folder on GCS (starting with `gs://`), and the `PROCESSOR_NAME` should look like `projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID` or `projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID/processorVersions/PROCESSOR_VERSION_ID`. You can get it either programmatically or copy it from the `Prediction endpoint` section of the `Processor details` tab in the Google Cloud Console.

```
GCS_OUTPUT_PATH = "gs://BUCKET_NAME/FOLDER_PATH"
PROCESSOR_NAME = "projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID"
```

```
from langchain_community.document_loaders.blob_loaders import Blob
from langchain_community.document_loaders.parsers import DocAIParser
```

Now, create a `DocAIParser`.

```
parser = DocAIParser(
    location="us", processor_name=PROCESSOR_NAME, gcs_output_path=GCS_OUTPUT_PATH
)
```

For this example, you can use an Alphabet earnings report that's uploaded to a public GCS bucket: [2022Q1\_alphabet\_earnings\_release.pdf](https://storage.googleapis.com/cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs/2022Q1_alphabet_earnings_release.pdf)

Create a `Blob` pointing to the PDF, then pass it to the `lazy_parse()` method to extract its content:

```
blob = Blob(
    path="gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs/2022Q1_alphabet_earnings_release.pdf"
)
```

We'll get one document per page, 11 in total:

```
docs = list(parser.lazy_parse(blob))
print(len(docs))
```

You can run end-to-end parsing of blobs one by one. If you have many documents, it might be a better approach to batch them together and perhaps even decouple parsing from handling the results of parsing.

```
operations = parser.docai_parse([blob])
print([op.operation.name for op in operations])
```

```
['projects/543079149601/locations/us/operations/16447136779727347991']
```

You can check whether the operations are finished:

```
parser.is_running(operations)
```

Once `is_running(operations)` returns `False`, the operations are finished and you can collect the results:

```
results = parser.get_results(operations)
print(results[0])
```

```
DocAIParsingResults(source_path='gs://vertex-pgt/examples/goog-exhibit-99-1-q1-2023-19.pdf', parsed_path='gs://vertex-pgt/test/run1/16447136779727347991/0')
```

And now we can finally generate Documents from the parsed results:

```
docs = list(parser.parse_from_results(results))
```
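The batched, decoupled workflow described above can be wrapped into a small helper. The sketch below is a minimal illustration, assuming `parser` and `Blob` are configured as shown earlier; the list of PDF paths and the polling interval are hypothetical.

```
import time

# Hypothetical list of PDFs already uploaded to GCS.
pdf_paths = [
    "gs://BUCKET_NAME/reports/report_1.pdf",
    "gs://BUCKET_NAME/reports/report_2.pdf",
]

# Kick off one batch of Document AI operations for all blobs at once.
blobs = [Blob(path=p) for p in pdf_paths]
operations = parser.docai_parse(blobs)

# Detach parsing from result handling: poll until the operations finish.
while parser.is_running(operations):
    time.sleep(30)  # polling interval is an arbitrary choice

# Collect the results and turn them into LangChain Documents.
results = parser.get_results(operations)
docs = list(parser.parse_from_results(results))
print(len(docs))
```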
https://python.langchain.com/docs/integrations/document_loaders/weather/
## Weather

> [OpenWeatherMap](https://openweathermap.org/) is an open-source weather service provider.

This loader fetches weather data from OpenWeatherMap's OneCall API, using the `pyowm` Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for.

```
from langchain_community.document_loaders import WeatherDataLoader
```

```
%pip install --upgrade --quiet pyowm
```

```
# Set API key either by passing it in to the constructor directly
# or by setting the environment variable "OPENWEATHERMAP_API_KEY".
from getpass import getpass

OPENWEATHERMAP_API_KEY = getpass()
```

```
loader = WeatherDataLoader.from_params(
    ["chennai", "vellore"], openweathermap_api_key=OPENWEATHERMAP_API_KEY
)
```

```
documents = loader.load()
documents
```
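As the comment above notes, the API key can also come from the environment instead of an interactive prompt. A minimal sketch, assuming the `OPENWEATHERMAP_API_KEY` environment variable is already exported:

```
import os

from langchain_community.document_loaders import WeatherDataLoader

# Read the key from the environment rather than prompting with getpass().
loader = WeatherDataLoader.from_params(
    ["chennai", "vellore"],
    openweathermap_api_key=os.environ["OPENWEATHERMAP_API_KEY"],
)
documents = loader.load()
```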
https://python.langchain.com/docs/integrations/document_loaders/vsdx/
A [visio file](https://fr.wikipedia.org/wiki/Microsoft_Visio) (with extension .vsdx) is associated with Microsoft Visio, a diagram creation software. It stores information about the structure, layout, and graphical elements of a diagram. This format facilitates the creation and sharing of visualizations in areas such as business, engineering, and computer science. A Visio file can contain multiple pages. Some of them may serve as the background for others, and this can occur across multiple layers. This **loader** extracts the textual content from each page and its associated pages, enabling the extraction of all visible text from each page, similar to what an OCR algorithm would do. **WARNING** : Only Visio files with the **.vsdx** extension are compatible with this loader. Files with extensions such as .vsd, … are not compatible because they cannot be converted to compressed XML. ``` ------ Page 0 ------Title page : SummarySource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleBest Caption of the worlThis is an arrowThis is EarthThis is a bounded arrow------ Page 1 ------Title page : GlossarySource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a title------ Page 2 ------Title page : blanket pageSource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleThis file is a vsdx fileFirst textSecond textThird text------ Page 3 ------Title page : BLABLABLASource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleAnother RED arrow wowArrow with point but redGreen lineUserCaptionsRed arrow magic !Something whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"This is a page with something...WAW I have learned something !This is a page with something...WAW I have learned something !X2------ Page 4 ------Title page : What a page !!Source : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleSomething whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"Another RED arrow wowArrow with point but redGreen lineUserCaptionsRed arrow magic !------ Page 5 ------Title page : next page after previous oneSource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleAnother RED arrow wowArrow with point but redGreen lineUserCaptionsRed arrow magic !Something whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0-\u00a0incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit involuptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa*qui officia deserunt mollit anim id est laborum.------ Page 6 ------Title page : Connector PageSource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleSomething whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"------ Page 7 ------Title page : Useful ↔ Useless pageSource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleSomething whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"Title of this document : BLABLABLA------ Page 8 ------Title page : Alone pageSource : ./example_data/fake.vsdx==> CONTENT <== Black cloudUnidirectional traffic primary pathUnidirectional traffic backup pathEncapsulationUserCaptionsBidirectional trafficAlone, sadTest of another pageThis is a \"bannier\"Tests of some exotics characters :\u00a0\u00e3\u00e4\u00e5\u0101\u0103 \u00fc\u2554\u00a0 \u00a0\u00bc \u00c7 \u25d8\u25cb\u2642\u266b\u2640\u00ee\u2665This is ethernetLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.This is an empty caseLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0-\u00a0 incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in culpa *qui officia deserunt mollit anim id est laborum.------ Page 9 ------Title page : BGSource : ./example_data/fake.vsdx==> CONTENT <== Best Caption of the worlThis is an arrowThis is EarthThis is a bounded arrowCreated byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a title------ Page 10 ------Title page : BG + caption1Source : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleAnother RED arrow wowArrow with point but redGreen lineUserCaptionsRed arrow magic !Something whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"Useful\u2194 Useless page\u00a0Tests of some exotics characters :\u00a0\u00e3\u00e4\u00e5\u0101\u0103 \u00fc\u2554\u00a0\u00a0\u00bc \u00c7 \u25d8\u25cb\u2642\u266b\u2640\u00ee\u2665------ Page 11 ------Title page : BG+Source : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a title------ Page 12 ------Title page : BG WITH CONTENTSource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. - Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. 
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.This is a page with a lot of text------ Page 13 ------Title page : 2nd caption with ____________________________________________________________________ contentSource : ./example_data/fake.vsdx==> CONTENT <== Created byCreated theModified byModified theVersionTitleFlorian MOREL2024-01-14FLORIAN MorelToday0.0.0.0.0.1This is a titleAnother RED arrow wowArrow with point but redGreen lineUserCaptionsRed arrow magic !Something whiteSomething RedThis a a completly useless diagramm, cool !!But this is for example !This diagramm is a base of many pages in this file. But it is editable in file \"BG WITH CONTENT\"Only connectors on this page. This is the CoNNeCtor page ```
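The page dump above is the kind of output produced after loading the file; the loading step itself does not appear in this excerpt. Below is a minimal sketch, assuming the loader is exposed as `VsdxLoader` in `langchain_community` and takes the file path as its first argument, as other LangChain file loaders do; the example file path matches the one shown in the output above.

```
from langchain_community.document_loaders import VsdxLoader

# One Document is produced per Visio page, including text inherited from
# associated background pages, as in the dump above.
loader = VsdxLoader("./example_data/fake.vsdx")
documents = loader.load()

print(len(documents))
print(documents[0].metadata)
print(documents[0].page_content[:200])
```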
https://python.langchain.com/docs/integrations/document_transformers/html2text/
## HTML to text

> [html2text](https://github.com/Alir3z4/html2text/) is a Python package that converts a page of `HTML` into clean, easy-to-read plain `ASCII text`. The ASCII also happens to be valid `Markdown` (a text-to-HTML format).

```
%pip install --upgrade --quiet html2text
```

```
from langchain_community.document_loaders import AsyncHtmlLoader

urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]
loader = AsyncHtmlLoader(urls)
docs = loader.load()
```

```
Fetching pages: 100%|############| 2/2 [00:00<00:00, 10.75it/s]
```

```
from langchain_community.document_transformers import Html2TextTransformer
```

```
urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
```

```
docs_transformed[0].page_content[1000:2000]
```

```
" * ESPNFC\n\n * X Games\n\n * SEC Network\n\n## ESPN Apps\n\n * ESPN\n\n * ESPN Fantasy\n\n## Follow ESPN\n\n * Facebook\n\n * Twitter\n\n * Instagram\n\n * Snapchat\n\n * YouTube\n\n * The ESPN Daily Podcast\n\n2023 FIFA Women's World Cup\n\n## Follow live: Canada takes on Nigeria in group stage of Women's World Cup\n\n2m\n\nEPA/Morgan Hancock\n\n## TOP HEADLINES\n\n * Snyder fined $60M over findings in investigation\n * NFL owners approve $6.05B sale of Commanders\n * Jags assistant comes out as gay in NFL milestone\n * O's alone atop East after topping slumping Rays\n * ACC's Phillips: Never condoned hazing at NU\n\n * Vikings WR Addison cited for driving 140 mph\n * 'Taking his time': Patient QB Rodgers wows Jets\n * Reyna got U.S. assurances after Berhalter rehire\n * NFL Future Power Rankings\n\n## USWNT AT THE WORLD CUP\n\n### USA VS. VIETNAM: 9 P.M. ET FRIDAY\n\n## How do you defend against Alex Morgan? Former opponents sound off\n\nThe U.S. forward is unstoppable at this level, scoring 121 goals and adding 49"
```

```
docs_transformed[1].page_content[1000:2000]
```

```
"t's brain,\ncomplemented by several key components:\n\n * **Planning**\n * Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n * Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\n * **Memory**\n * Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\n * Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\n * **Tool use**\n * The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution c"
```
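Because the transformed documents are plain text/Markdown, they can go straight into a standard chunking step before indexing. A minimal sketch, assuming `docs_transformed` from above and that the `langchain_text_splitters` package is installed; the chunk sizes are arbitrary.

```
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Chunk the cleaned text so it can be embedded and stored in a vector store.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs_transformed)

print(len(chunks))
print(chunks[0].page_content[:200])
```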
https://python.langchain.com/docs/integrations/document_transformers/google_translate/
## Google Translate

[Google Translate](https://translate.google.com/) is a multilingual neural machine translation service developed by Google to translate text, documents and websites from one language into another.

The `GoogleTranslateTransformer` allows you to translate text and HTML with the [Google Cloud Translation API](https://cloud.google.com/translate).

To use it, you should have the `google-cloud-translate` Python package installed, and a Google Cloud project with the [Translation API enabled](https://cloud.google.com/translate/docs/setup). This transformer uses the [Advanced edition (v3)](https://cloud.google.com/translate/docs/intro-to-v3).

* [Google Neural Machine Translation](https://en.wikipedia.org/wiki/Google_Neural_Machine_Translation)
* [A Neural Network for Machine Translation, at Production Scale](https://blog.research.google/2016/09/a-neural-network-for-machine.html)

```
%pip install --upgrade --quiet google-cloud-translate
```

```
from langchain_community.document_transformers import GoogleTranslateTransformer
from langchain_core.documents import Document
```

## Input

This is the document we'll translate:

```
sample_text = """[Generated with Google Bard]
Subject: Key Business Process Updates
Date: Friday, 27 October 2023
Dear team,
I am writing to provide an update on some of our key business processes.
Sales process
We have recently implemented a new sales process that is designed to help us close more deals and grow our revenue. The new process includes a more rigorous qualification process, a more streamlined proposal process, and a more effective customer relationship management (CRM) system.
Marketing process
We have also revamped our marketing process to focus on creating more targeted and engaging content. We are also using more social media and paid advertising to reach a wider audience.
Customer service process
We have also made some improvements to our customer service process. We have implemented a new customer support system that makes it easier for customers to get help with their problems. We have also hired more customer support representatives to reduce wait times.
Overall, we are very pleased with the progress we have made on improving our key business processes. We believe that these changes will help us to achieve our goals of growing our business and providing our customers with the best possible experience.
If you have any questions or feedback about any of these changes, please feel free to contact me directly.
Thank you,
Lewis Cymbal
CEO, Cymbal Bank
"""
```

When initializing the `GoogleTranslateTransformer`, you can include the following parameters to configure the requests.

* `project_id`: Google Cloud Project ID.
* `location`: (Optional) Translate model location.
  * Default: `global`
* `model_id`: (Optional) Translate [model ID](https://cloud.google.com/translate/docs/advanced/translating-text-v3#comparing-models) to use.
* `glossary_id`: (Optional) Translate [glossary ID](https://cloud.google.com/translate/docs/advanced/glossary) to use.
* `api_endpoint`: (Optional) [Regional endpoint](https://cloud.google.com/translate/docs/advanced/endpoints) to use.

```
documents = [Document(page_content=sample_text)]
translator = GoogleTranslateTransformer(project_id="<YOUR_PROJECT_ID>")
```

## Output

After translating a document, the result will be returned as a new document with the `page_content` translated into the target language.

You can provide the following keyword parameters to the `transform_documents()` method:

* `target_language_code`: [ISO 639](https://en.wikipedia.org/wiki/ISO_639) language code of the output document.
  * For supported languages, refer to [Language support](https://cloud.google.com/translate/docs/languages).
* `source_language_code`: (Optional) [ISO 639](https://en.wikipedia.org/wiki/ISO_639) language code of the input document.
  * If not provided, language will be auto-detected.
* `mime_type`: (Optional) [Media Type](https://en.wikipedia.org/wiki/Media_type) of the input text.
  * Options: `text/plain` (Default), `text/html`.

```
translated_documents = translator.transform_documents(
    documents, target_language_code="es"
)
```

```
for doc in translated_documents:
    print(doc.metadata)
    print(doc.page_content)
```

```
{'model': '', 'detected_language_code': 'en'}
[Generado con Google Bard]
Asunto: Actualizaciones clave de procesos comerciales
Fecha: viernes 27 de octubre de 2023
Estimado equipo,
Le escribo para brindarle una actualización sobre algunos de nuestros procesos comerciales clave.
Proceso de ventas
Recientemente implementamos un nuevo proceso de ventas que está diseñado para ayudarnos a cerrar más acuerdos y aumentar nuestros ingresos. El nuevo proceso incluye un proceso de calificación más riguroso, un proceso de propuesta más simplificado y un sistema de gestión de relaciones con el cliente (CRM) más eficaz.
Proceso de mercadeo
También hemos renovado nuestro proceso de marketing para centrarnos en crear contenido más específico y atractivo. También estamos utilizando más redes sociales y publicidad paga para llegar a una audiencia más amplia.
proceso de atención al cliente
También hemos realizado algunas mejoras en nuestro proceso de atención al cliente. Hemos implementado un nuevo sistema de atención al cliente que facilita que los clientes obtengan ayuda con sus problemas. También hemos contratado más representantes de atención al cliente para reducir los tiempos de espera.
En general, estamos muy satisfechos con el progreso que hemos logrado en la mejora de nuestros procesos comerciales clave. Creemos que estos cambios nos ayudarán a lograr nuestros objetivos de hacer crecer nuestro negocio y brindar a nuestros clientes la mejor experiencia posible.
Si tiene alguna pregunta o comentario sobre cualquiera de estos cambios, no dude en ponerse en contacto conmigo directamente.
Gracias,
Platillo Lewis
Director ejecutivo, banco de platillos
```
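The `mime_type` and `source_language_code` parameters can be combined when the input is HTML rather than plain text. A minimal sketch, assuming the same `translator` as above; the HTML snippet and language codes are purely illustrative.

```
html_document = Document(
    page_content="<p>We have recently implemented a new sales process.</p>"
)

# Translate HTML from English to German, declaring the source language
# explicitly instead of relying on auto-detection.
translated_html = translator.transform_documents(
    [html_document],
    target_language_code="de",
    source_language_code="en",
    mime_type="text/html",
)
print(translated_html[0].page_content)
```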
https://python.langchain.com/docs/integrations/document_loaders/web_base/
## WebBaseLoader This covers how to use `WebBaseLoader` to load all text from `HTML` webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as `IMSDbLoader`, `AZLyricsLoader`, and `CollegeConfidentialLoader`. If you don’t want to worry about website crawling, bypassing JS-blocking sites, and data cleaning, consider using `FireCrawlLoader`. ``` from langchain_community.document_loaders import WebBaseLoader ``` ``` loader = WebBaseLoader("https://www.espn.com/") ``` To bypass SSL verification errors during fetching, you can set the “verify” option: loader.requests\_kwargs = {‘verify’:False} ``` [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most8h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? 
Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court10h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. 
makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. 
All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)] ``` ``` """# Use this piece of code for testing new custom BeautifulSoup parsersimport requestsfrom bs4 import BeautifulSouphtml_doc = requests.get("{INSERT_NEW_URL_HERE}")soup = BeautifulSoup(html_doc.text, 'html.parser')# Beautiful soup logic to be exported to langchain_community.document_loaders.webpage.py# Example: transcript = soup.select_one("td[class='scrtext']").text# BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/""" ``` ## Loading multiple webpages[​](#loading-multiple-webpages "Direct link to Loading multiple webpages") You can also load multiple webpages at once by passing in a list of urls to the loader. This will return a list of documents in the same order as the urls passed in. ``` loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])docs = loader.load()docs ``` ``` [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. 
Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. 
The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. 
All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)] ``` ### Load multiple urls concurrently[​](#load-multiple-urls-concurrently "Direct link to Load multiple urls concurrently") You can speed up the scraping process by scraping and parsing multiple urls concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren’t concerned about being a good citizen, or you control the server you are scraping and don’t care about load, you can change the `requests_per_second` parameter to increase the max concurrent requests. Note, while this will speed up the scraping process, but may cause the server to block you. Be careful! ``` %pip install --upgrade --quiet nest_asyncio# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply() ``` ``` Requirement already satisfied: nest_asyncio in /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages (1.5.6) ``` ``` loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])loader.requests_per_second = 1docs = loader.aload()docs ``` ``` [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? 
Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. 
Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. 
Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)] ``` ## Loading a xml file, or using a different BeautifulSoup parser[​](#loading-a-xml-file-or-using-a-different-beautifulsoup-parser "Direct link to Loading a xml file, or using a different BeautifulSoup parser") You can also look at `SitemapLoader` for an example of how to load a sitemap file, which is an example of using this feature. ``` loader = WebBaseLoader( "https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml")loader.default_parser = "xml"docs = loader.load()docs ``` ``` [Document(page_content='\n\n10\nEnergy\n3\n2018-01-01\n2018-01-01\nfalse\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n§ 431.86\nSection § 431.86\n\nEnergy\nDEPARTMENT OF ENERGY\nENERGY CONSERVATION\nENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT\nCommercial Packaged Boilers\nTest Procedures\n\n\n\n\n§\u2009431.86\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n(a) Scope. This section provides test procedures, pursuant to the Energy Policy and Conservation Act (EPCA), as amended, which must be followed for measuring the combustion efficiency and/or thermal efficiency of a gas- or oil-fired commercial packaged boiler.\n(b) Testing and Calculations. 
Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conducting the appropriate test procedure(s) indicated in Table 1 of this section.\n\nTable 1—Test Requirements for Commercial Packaged Boiler Equipment Classes\n\nEquipment category\nSubcategory\nCertified rated inputBtu/h\n\nStandards efficiency metric(§\u2009431.87)\n\nTest procedure(corresponding to\nstandards efficiency\nmetric required\nby §\u2009431.87)\n\n\n\nHot Water\nGas-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nGas-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nHot Water\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nOil-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nSteam\nGas-fired (all*)\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nGas-fired (all*)\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3 with Section 2.4.3.2.\n\n\n\nSteam\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nOil-fired\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3. with Section 2.4.3.2.\n\n\n\n*\u2009Equipment classes for commercial packaged boilers as of July 22, 2009 (74 FR 36355) distinguish between gas-fired natural draft and all other gas-fired (except natural draft).\n\n(c) Field Tests. The field test provisions of appendix A may be used only to test a unit of commercial packaged boiler with rated input greater than 5,000,000 Btu/h.\n[81 FR 89305, Dec. 9, 2016]\n\n\nEnergy Efficiency Standards\n\n', lookup_str='', metadata={'source': 'https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml'}, lookup_index=0)] ``` ## Using proxies[​](#using-proxies "Direct link to Using proxies") Sometimes you might need to use proxies to get around IP blocks. You can pass in a dictionary of proxies to the loader (and `requests` underneath) to use them. ``` loader = WebBaseLoader( "https://www.walmart.com/search?q=parrots", proxies={ "http": "http://{username}:{password}:@proxy.service.com:6666/", "https": "https://{username}:{password}:@proxy.service.com:6666/", },)docs = loader.load() ``` * * * #### Help us out by providing feedback on this documentation page:
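Pulling the options from this page together, here is a minimal sketch (the URLs and proxy credentials are placeholders) that loads several pages through a proxy with SSL verification disabled and a conservative request rate; it only combines the `proxies`, `requests_kwargs`, and `requests_per_second` settings shown earlier:

```python
from langchain_community.document_loaders import WebBaseLoader

# Placeholder URLs; swap in the pages you actually want to load.
loader = WebBaseLoader(
    ["https://www.espn.com/", "https://google.com"],
    proxies={
        "http": "http://{username}:{password}@proxy.service.com:6666/",
        "https": "https://{username}:{password}@proxy.service.com:6666/",
    },
)

# Skip SSL certificate verification (useful behind intercepting proxies).
loader.requests_kwargs = {"verify": False}

# Stay close to the default of 2 requests per second to avoid being blocked.
loader.requests_per_second = 1

docs = loader.load()
print(len(docs), docs[0].metadata["source"])
```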
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:46.685Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/web_base/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/web_base/", "description": "This covers how to use WebBaseLoader to load all text from HTML", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "5882", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"web_base\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:45 GMT", "etag": "W/\"3181d01969c16b49cf51250c3f4411dd\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::54c7l-1713753585176-c7a8fd50d042" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/web_base/", "property": "og:url" }, { "content": "WebBaseLoader | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This covers how to use WebBaseLoader to load all text from HTML", "property": "og:description" } ], "title": "WebBaseLoader | 🦜️🔗 LangChain" }
WebBaseLoader This covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader. If you don’t want to worry about website crawling, bypassing JS-blocking sites, and data cleaning, consider using FireCrawlLoader. from langchain_community.document_loaders import WebBaseLoader loader = WebBaseLoader("https://www.espn.com/") To bypass SSL verification errors during fetching, you can set the “verify” option: loader.requests_kwargs = {‘verify’:False} [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most8h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? 
Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court10h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. 
Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)] """ # Use this piece of code for testing new custom BeautifulSoup parsers import requests from bs4 import BeautifulSoup html_doc = requests.get("{INSERT_NEW_URL_HERE}") soup = BeautifulSoup(html_doc.text, 'html.parser') # Beautiful soup logic to be exported to langchain_community.document_loaders.webpage.py # Example: transcript = soup.select_one("td[class='scrtext']").text # BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/ """ Loading multiple webpages​ You can also load multiple webpages at once by passing in a list of urls to the loader. This will return a list of documents in the same order as the urls passed in. 
loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"]) docs = loader.load() docs [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. 
Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. 
Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)] Load multiple urls concurrently​ You can speed up the scraping process by scraping and parsing multiple urls concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren’t concerned about being a good citizen, or you control the server you are scraping and don’t care about load, you can change the requests_per_second parameter to increase the max concurrent requests. Note, while this will speed up the scraping process, but may cause the server to block you. Be careful! 
%pip install --upgrade --quiet nest_asyncio # fixes a bug with asyncio and jupyter import nest_asyncio nest_asyncio.apply() Requirement already satisfied: nest_asyncio in /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages (1.5.6) loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"]) loader.requests_per_second = 1 docs = loader.aload() docs [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. 
Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. 
Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)] Loading a xml file, or using a different BeautifulSoup parser​ You can also look at SitemapLoader for an example of how to load a sitemap file, which is an example of using this feature. 
loader = WebBaseLoader( "https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml" ) loader.default_parser = "xml" docs = loader.load() docs [Document(page_content='\n\n10\nEnergy\n3\n2018-01-01\n2018-01-01\nfalse\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n§ 431.86\nSection § 431.86\n\nEnergy\nDEPARTMENT OF ENERGY\nENERGY CONSERVATION\nENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT\nCommercial Packaged Boilers\nTest Procedures\n\n\n\n\n§\u2009431.86\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n(a) Scope. This section provides test procedures, pursuant to the Energy Policy and Conservation Act (EPCA), as amended, which must be followed for measuring the combustion efficiency and/or thermal efficiency of a gas- or oil-fired commercial packaged boiler.\n(b) Testing and Calculations. Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conducting the appropriate test procedure(s) indicated in Table 1 of this section.\n\nTable 1—Test Requirements for Commercial Packaged Boiler Equipment Classes\n\nEquipment category\nSubcategory\nCertified rated inputBtu/h\n\nStandards efficiency metric(§\u2009431.87)\n\nTest procedure(corresponding to\nstandards efficiency\nmetric required\nby §\u2009431.87)\n\n\n\nHot Water\nGas-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nGas-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nHot Water\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nOil-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nSteam\nGas-fired (all*)\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nGas-fired (all*)\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3 with Section 2.4.3.2.\n\n\n\nSteam\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nOil-fired\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3. with Section 2.4.3.2.\n\n\n\n*\u2009Equipment classes for commercial packaged boilers as of July 22, 2009 (74 FR 36355) distinguish between gas-fired natural draft and all other gas-fired (except natural draft).\n\n(c) Field Tests. The field test provisions of appendix A may be used only to test a unit of commercial packaged boiler with rated input greater than 5,000,000 Btu/h.\n[81 FR 89305, Dec. 9, 2016]\n\n\nEnergy Efficiency Standards\n\n', lookup_str='', metadata={'source': 'https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml'}, lookup_index=0)] Using proxies​ Sometimes you might need to use proxies to get around IP blocks. You can pass in a dictionary of proxies to the loader (and requests underneath) to use them. loader = WebBaseLoader( "https://www.walmart.com/search?q=parrots", proxies={ "http": "http://{username}:{password}:@proxy.service.com:6666/", "https": "https://{username}:{password}:@proxy.service.com:6666/", }, ) docs = loader.load() Help us out by providing feedback on this documentation page:
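To tie the sections above together (concurrent loading with `requests_per_second`, and routing requests through proxies), here is a minimal sketch; the URLs, proxy credentials, and rate limit below are placeholders to replace with your own values.

```python
from langchain_community.document_loaders import WebBaseLoader

# Placeholder URLs and proxy credentials: substitute your own values.
loader = WebBaseLoader(
    ["https://www.espn.com/", "https://google.com"],
    proxies={
        "http": "http://{username}:{password}:@proxy.service.com:6666/",
        "https": "https://{username}:{password}:@proxy.service.com:6666/",
    },
)
# Keep the request rate modest unless you control the target server.
loader.requests_per_second = 2
docs = loader.aload()  # fetches the pages concurrently
```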
https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/
## Unstructured File This notebook covers how to use the `Unstructured` package to load files of many types. `Unstructured` currently supports loading of text files, powerpoints, html, pdfs, images, and more. ``` # # Install package%pip install --upgrade --quiet "unstructured[all-docs]" ``` ``` # # Install other dependencies# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst# !brew install libmagic# !brew install poppler# !brew install tesseract# # If parsing xml / html documents:# !brew install libxml2# !brew install libxslt ``` ``` # import nltk# nltk.download('punkt') ``` ``` from langchain_community.document_loaders import UnstructuredFileLoader ``` ``` loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt") ``` ``` docs = loader.load() ``` ``` docs[0].page_content[:400] ``` ``` 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit' ``` ### Load list of files[​](#load-list-of-files "Direct link to Load list of files") ``` files = ["./example_data/whatsapp_chat.txt", "./example_data/layout-parser-paper.pdf"] ``` ``` loader = UnstructuredFileLoader(files) ``` ``` docs = loader.load() ``` ``` docs[0].page_content[:400] ``` ## Retain Elements[​](#retain-elements "Direct link to Retain Elements") Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode="elements"`. ``` loader = UnstructuredFileLoader( "./example_data/state_of_the_union.txt", mode="elements") ``` ``` docs = loader.load()docs[:5] ``` ``` [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)] ``` ## Define a Partitioning Strategy[​](#define-a-partitioning-strategy "Direct link to Define a Partitioning Strategy") The Unstructured document loader allows users to pass in a `strategy` parameter that lets `unstructured` know how to partition the document. Currently supported strategies are `"hi_res"` (the default) and `"fast"`. Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the `strategy` kwarg is ignored. 
In some cases, the high res strategy will fallback to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an `UnstructuredFileLoader` below. ``` from langchain_community.document_loaders import UnstructuredFileLoader ``` ``` loader = UnstructuredFileLoader( "layout-parser-paper-fast.pdf", strategy="fast", mode="elements") ``` ``` [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)] ``` ## PDF Example[​](#pdf-example "Direct link to PDF Example") Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. Modes of operation are - `single` all the text from all elements are combined into one (default) - `elements` maintain individual elements - `paged` texts from each page are only combined ``` !wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../" ``` ``` loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements") ``` ``` [Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)] ``` If you need to post process the `unstructured` elements after extraction, you can pass in a list of `str` -\> `str` functions to the `post_processors` kwarg when you instantiate the `UnstructuredFileLoader`. This applies to other Unstructured loaders as well. Below is an example. 
``` from langchain_community.document_loaders import UnstructuredFileLoaderfrom unstructured.cleaners.core import clean_extra_whitespace ``` ``` loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements", post_processors=[clean_extra_whitespace],) ``` ``` [Document(page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'}), Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 2 0 2', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='n u J', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 258.36), (16.34, 286.14), (36.34, 286.14), (36.34, 258.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'})] ``` ## Unstructured API[​](#unstructured-api "Direct link to Unstructured API") If you want to get up and running with less set up, you can simply run `pip install unstructured` and use `UnstructuredAPIFileLoader` or `UnstructuredAPIFileIOLoader`. That will process your document using the hosted Unstructured API. You can generate a free Unstructured API key [here](https://www.unstructured.io/api-key/). 
The [Unstructured documentation](https://unstructured-io.github.io/unstructured/) page will have instructions on how to generate an API key once they’re available. Check out the instructions [here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image) if you’d like to self-host the Unstructured API or run it locally. ``` from langchain_community.document_loaders import UnstructuredAPIFileLoader ``` ``` filenames = ["example_data/fake.docx", "example_data/fake-email.eml"] ``` ``` loader = UnstructuredAPIFileLoader( file_path=filenames[0], api_key="FAKE_API_KEY",) ``` ``` docs = loader.load()docs[0] ``` ``` Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'}) ``` You can also batch multiple files through the Unstructured API in a single API using `UnstructuredAPIFileLoader`. ``` loader = UnstructuredAPIFileLoader( file_path=filenames, api_key="FAKE_API_KEY",) ``` ``` docs = loader.load()docs[0] ``` ``` Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']}) ```
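The modes of operation listed above include `paged`, which none of the cells on this page demonstrate. Here is a minimal sketch, assuming the same `layout-parser-paper.pdf` example file used earlier is available locally:

```python
from langchain_community.document_loaders import UnstructuredFileLoader

# "paged" mode combines the text of each PDF page into its own Document.
loader = UnstructuredFileLoader(
    "./example_data/layout-parser-paper.pdf",
    strategy="fast",  # trade some accuracy for speed, as described above
    mode="paged",
)
docs = loader.load()
# One Document per page; page_number should appear in the metadata.
print(len(docs), docs[0].metadata.get("page_number"))
```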
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:47.097Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/", "description": "This notebook covers how to use Unstructured package to load files of", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4407", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"unstructured_file\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:46 GMT", "etag": "W/\"02cb3718d164387d410ead26cb71505c\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::pzcg6-1713753586194-55c8ac3d0410" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/", "property": "og:url" }, { "content": "Unstructured File | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This notebook covers how to use Unstructured package to load files of", "property": "og:description" } ], "title": "Unstructured File | 🦜️🔗 LangChain" }
Unstructured File This notebook covers how to use Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more. # # Install package %pip install --upgrade --quiet "unstructured[all-docs]" # # Install other dependencies # # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst # !brew install libmagic # !brew install poppler # !brew install tesseract # # If parsing xml / html documents: # !brew install libxml2 # !brew install libxslt # import nltk # nltk.download('punkt') from langchain_community.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt") docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit' Load list of files​ files = ["./example_data/whatsapp_chat.txt", "./example_data/layout-parser-paper.pdf"] loader = UnstructuredFileLoader(files) docs[0].page_content[:400] Retain Elements​ Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredFileLoader( "./example_data/state_of_the_union.txt", mode="elements" ) [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)] Define a Partitioning Strategy​ Unstructured document loader allow users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade-off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fallback to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below. 
from langchain_community.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader( "layout-parser-paper-fast.pdf", strategy="fast", mode="elements" ) [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)] PDF Example​ Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. Modes of operation are - single all the text from all elements are combined into one (default) - elements maintain individual elements - paged texts from each page are only combined !wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../" loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements" ) [Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)] If you need to post process the unstructured elements after extraction, you can pass in a list of str -> str functions to the post_processors kwarg when you instantiate the UnstructuredFileLoader. This applies to other Unstructured loaders as well. Below is an example. 
from langchain_community.document_loaders import UnstructuredFileLoader from unstructured.cleaners.core import clean_extra_whitespace loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements", post_processors=[clean_extra_whitespace], ) [Document(page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'}), Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 2 0 2', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='n u J', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 258.36), (16.34, 286.14), (36.34, 286.14), (36.34, 258.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'})] Unstructured API​ If you want to get up and running with less set up, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. You can generate a free Unstructured API key here. The Unstructured documentation page will have instructions on how to generate an API key once they’re available. Check out the instructions here if you’d like to self-host the Unstructured API or run it locally. 
from langchain_community.document_loaders import UnstructuredAPIFileLoader filenames = ["example_data/fake.docx", "example_data/fake-email.eml"] loader = UnstructuredAPIFileLoader( file_path=filenames[0], api_key="FAKE_API_KEY", ) docs = loader.load() docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'}) You can also batch multiple files through the Unstructured API in a single API using UnstructuredAPIFileLoader. loader = UnstructuredAPIFileLoader( file_path=filenames, api_key="FAKE_API_KEY", ) docs = loader.load() docs[0] Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})
https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer/
## Nuclia > [Nuclia](https://nuclia.com/) automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. The `Nuclia Understanding API` document transformer splits text into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all the sentences. To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at [https://nuclia.cloud](https://nuclia.cloud/), and then [create a NUA key](https://docs.nuclia.dev/docs/docs/using/understanding/intro). ``` from langchain_community.document_transformers.nuclia_text_transform import NucliaTextTransformer ``` ``` %pip install --upgrade --quiet protobuf%pip install --upgrade --quiet nucliadb-protos ``` ``` import osos.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>" # e.g. europe-1os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>" ``` To use the Nuclia document transformer, you need to instantiate a `NucliaUnderstandingAPI` tool with `enable_ml` set to `True`: ``` from langchain_community.tools.nuclia import NucliaUnderstandingAPInua = NucliaUnderstandingAPI(enable_ml=True) ``` The Nuclia document transformer must be called in async mode, so you need to use the `atransform_documents` method: ``` import asynciofrom langchain_community.document_transformers.nuclia_text_transform import ( NucliaTextTransformer,)from langchain_core.documents import Documentasync def process(): documents = [ Document(page_content="<TEXT 1>", metadata={}), Document(page_content="<TEXT 2>", metadata={}), Document(page_content="<TEXT 3>", metadata={}), ] nuclia_transformer = NucliaTextTransformer(nua) transformed_documents = await nuclia_transformer.atransform_documents(documents) print(transformed_documents)asyncio.run(process()) ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:47.466Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer/", "description": "Nuclia automatically indexes your unstructured", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3479", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"nuclia_transformer\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:47 GMT", "etag": "W/\"1cd7686961543dc7f5d4194359d98e55\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::nvjf2-1713753587102-ea8bcc2986a8" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer/", "property": "og:url" }, { "content": "Nuclia | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Nuclia automatically indexes your unstructured", "property": "og:description" } ], "title": "Nuclia | 🦜️🔗 LangChain" }
Nuclia Nuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing. Nuclia Understanding API document transformer splits text into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all the sentences. To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at https://nuclia.cloud, and then create a NUA key. from langchain_community.document_transformers.nuclia_text_transform import NucliaTextTransformer %pip install --upgrade --quiet protobuf %pip install --upgrade --quiet nucliadb-protos import os os.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>" # e.g. europe-1 os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>" To use the Nuclia document transformer, you need to instantiate a NucliaUnderstandingAPI tool with enable_ml set to True: from langchain_community.tools.nuclia import NucliaUnderstandingAPI nua = NucliaUnderstandingAPI(enable_ml=True) The Nuclia document transformer must be called in async mode, so you need to use the atransform_documents method: import asyncio from langchain_community.document_transformers.nuclia_text_transform import ( NucliaTextTransformer, ) from langchain_core.documents import Document async def process(): documents = [ Document(page_content="<TEXT 1>", metadata={}), Document(page_content="<TEXT 2>", metadata={}), Document(page_content="<TEXT 3>", metadata={}), ] nuclia_transformer = NucliaTextTransformer(nua) transformed_documents = await nuclia_transformer.atransform_documents(documents) print(transformed_documents) asyncio.run(process())
https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat/
[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content. This notebook covers how to load data from the `WhatsApp Chats` into a format that can be ingested into LangChain. ``` from langchain_community.document_loaders import WhatsAppChatLoader ```
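The page stops at the import. As a minimal sketch (the file path below is a placeholder for your own exported chat), loading an export looks like this:

```python
from langchain_community.document_loaders import WhatsAppChatLoader

# Placeholder path: export a chat from WhatsApp and point the loader at the .txt file.
loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")
docs = loader.load()
print(docs[0].page_content[:200])
```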
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:47.728Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat/", "description": "WhatsApp (also called", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"whatsapp_chat\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:47 GMT", "etag": "W/\"4847dbeab548e8bfdcce6e2eb93536cf\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::cgwfs-1713753587621-935d4b0e5107" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat/", "property": "og:url" }, { "content": "WhatsApp Chat | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "WhatsApp (also called", "property": "og:description" } ], "title": "WhatsApp Chat | 🦜️🔗 LangChain" }
WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content. This notebook covers how to load data from the WhatsApp Chats into a format that can be ingested into LangChain. from langchain_community.document_loaders import WhatsAppChatLoader
https://python.langchain.com/docs/integrations/document_loaders/wikipedia/
## Wikipedia > [Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history. This notebook shows how to load wiki pages from `wikipedia.org` into the Document format that we use downstream. ## Installation[​](#installation "Direct link to Installation") First, you need to install the `wikipedia` Python package. ``` %pip install --upgrade --quiet wikipedia ``` ## Examples[​](#examples "Direct link to Examples") `WikipediaLoader` has these arguments: - `query`: free text which is used to find documents in Wikipedia - optional `lang`: default=“en”. Use it to search in a specific language part of Wikipedia - optional `load_max_docs`: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now. - optional `load_all_available_meta`: default=False. By default only the most important fields are downloaded: `Published` (date when the document was published/last updated), `title`, `Summary`. If True, other fields are also downloaded. ``` from langchain_community.document_loaders import WikipediaLoader ``` ``` docs = WikipediaLoader(query="HUNTER X HUNTER", load_max_docs=2).load()len(docs) ``` ``` docs[0].metadata # meta-information of the Document ``` ``` docs[0].page_content[:400] # the content of the Document ```
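As a minimal sketch of the optional arguments described above (the query and language here are only examples), they can be combined in a single call:

```python
from langchain_community.document_loaders import WikipediaLoader

# Search the German-language Wikipedia, limit the download to two pages,
# and request the full set of available metadata fields.
docs = WikipediaLoader(
    query="HUNTER X HUNTER",
    lang="de",
    load_max_docs=2,
    load_all_available_meta=True,
).load()
print(docs[0].metadata.keys())
```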
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:48.450Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/wikipedia/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/wikipedia/", "description": "Wikipedia is a multilingual free online", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3482", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"wikipedia\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:48 GMT", "etag": "W/\"9e34807070de2d2e99dca11e1f923e5f\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::jq5s4-1713753588380-d19ecaca7b5d" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/wikipedia/", "property": "og:url" }, { "content": "Wikipedia | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Wikipedia is a multilingual free online", "property": "og:description" } ], "title": "Wikipedia | 🦜️🔗 LangChain" }
Wikipedia Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history. This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream. Installation​ First, you need to install the wikipedia Python package. %pip install --upgrade --quiet wikipedia Examples​ WikipediaLoader has these arguments: - query: free text which is used to find documents in Wikipedia - optional lang: default=“en”. Use it to search in a specific language part of Wikipedia - optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now. - optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded. from langchain_community.document_loaders import WikipediaLoader docs = WikipediaLoader(query="HUNTER X HUNTER", load_max_docs=2).load() len(docs) docs[0].metadata # meta-information of the Document docs[0].page_content[:400] # the content of the Document
https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger/
## OpenAI metadata tagger It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. The `OpenAIMetadataTagger` document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable `OpenAI Functions`\-powered chain under the hood, so if you pass a custom LLM instance, it must be an `OpenAI` model with functions support. **Note:** This document transformer works best with complete documents, so it’s best to run it first with whole documents before doing any other splitting or processing! For example, let’s say you wanted to index a set of movie reviews. You could initialize the document transformer with a valid `JSON Schema` object as follows: ``` from langchain_community.document_transformers.openai_functions import ( create_metadata_tagger,)from langchain_core.documents import Documentfrom langchain_openai import ChatOpenAI ``` ``` schema = { "properties": { "movie_title": {"type": "string"}, "critic": {"type": "string"}, "tone": {"type": "string", "enum": ["positive", "negative"]}, "rating": { "type": "integer", "description": "The number of stars the critic rated the movie", }, }, "required": ["movie_title", "critic", "tone"],}# Must be an OpenAI model that supports functionsllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm) ``` You can then simply pass the document transformer a list of documents, and it will extract metadata from the contents: ``` original_documents = [ Document( page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars." ), Document( page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}, ),]enhanced_documents = document_transformer.transform_documents(original_documents) ``` ``` import jsonprint( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n",) ``` ``` Review of The Bee MovieBy Roger EbertThis is the greatest movie ever made. 4 out of 5 stars.{"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4}---------------Review of The GodfatherBy AnonymousThis movie was super boring. 1 out of 5 stars.{"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false} ``` The new documents can then be further processed by a text splitter before being loaded into a vector store. Extracted fields will not overwrite existing metadata. You can also initialize the document transformer with a Pydantic schema: ``` from typing import Literalfrom pydantic import BaseModel, Fieldclass Properties(BaseModel): movie_title: str critic: str tone: Literal["positive", "negative"] rating: int = Field(description="Rating out of 5 stars")document_transformer = create_metadata_tagger(Properties, llm)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n",) ``` ``` Review of The Bee MovieBy Roger EbertThis is the greatest movie ever made. 
4 out of 5 stars.{"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4}---------------Review of The GodfatherBy AnonymousThis movie was super boring. 1 out of 5 stars.{"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false} ``` ## Customization[​](#customization "Direct link to Customization") You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask the LLM to focus specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt: ``` from langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template( """Extract relevant information from the following text.Anonymous critics are actually Roger Ebert.{input}""")document_transformer = create_metadata_tagger(schema, llm, prompt=prompt)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n",) ``` ``` Review of The Bee MovieBy Roger EbertThis is the greatest movie ever made. 4 out of 5 stars.{"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4}---------------Review of The GodfatherBy AnonymousThis movie was super boring. 1 out of 5 stars.{"movie_title": "The Godfather", "critic": "Roger Ebert", "tone": "negative", "rating": 1, "reliable": false} ```
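Picking up the note above that the new documents can be further processed by a text splitter before being loaded into a vector store, here is a minimal sketch (the chunk sizes are arbitrary placeholders):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Split the metadata-tagged documents into smaller chunks before embedding;
# each chunk keeps the metadata extracted above.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
split_docs = splitter.split_documents(enhanced_documents)
```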
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:49.444Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger/", "description": "It can often be useful to tag ingested documents with structured", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4411", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"openai_metadata_tagger\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:49 GMT", "etag": "W/\"40ed315fea89238bcab29078586875b9\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::6jz7h-1713753589333-12ed98849b4a" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger/", "property": "og:url" }, { "content": "OpenAI metadata tagger | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "It can often be useful to tag ingested documents with structured", "property": "og:description" } ], "title": "OpenAI metadata tagger | 🦜️🔗 LangChain" }
OpenAI metadata tagger It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious. The OpenAIMetadataTagger document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support. Note: This document transformer works best with complete documents, so it’s best to run it first with whole documents before doing any other splitting or processing! For example, let’s say you wanted to index a set of movie reviews. You could initialize the document transformer with a valid JSON Schema object as follows: from langchain_community.document_transformers.openai_functions import ( create_metadata_tagger, ) from langchain_core.documents import Document from langchain_openai import ChatOpenAI schema = { "properties": { "movie_title": {"type": "string"}, "critic": {"type": "string"}, "tone": {"type": "string", "enum": ["positive", "negative"]}, "rating": { "type": "integer", "description": "The number of stars the critic rated the movie", }, }, "required": ["movie_title", "critic", "tone"], } # Must be an OpenAI model that supports functions llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613") document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm) You can then simply pass the document transformer a list of documents, and it will extract metadata from the contents: original_documents = [ Document( page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars." ), Document( page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}, ), ] enhanced_documents = document_transformer.transform_documents(original_documents) import json print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n", ) Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false} The new documents can then be further processed by a text splitter before being loaded into a vector store. Extracted fields will not overwrite existing metadata. You can also initialize the document transformer with a Pydantic schema: from typing import Literal from pydantic import BaseModel, Field class Properties(BaseModel): movie_title: str critic: str tone: Literal["positive", "negative"] rating: int = Field(description="Rating out of 5 stars") document_transformer = create_metadata_tagger(Properties, llm) enhanced_documents = document_transformer.transform_documents(original_documents) print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n", ) Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. 
{"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false} Customization​ You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask the LLM to focus specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt: from langchain_core.prompts import ChatPromptTemplate prompt = ChatPromptTemplate.from_template( """Extract relevant information from the following text. Anonymous critics are actually Roger Ebert. {input} """ ) document_transformer = create_metadata_tagger(schema, llm, prompt=prompt) enhanced_documents = document_transformer.transform_documents(original_documents) print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n", ) Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Roger Ebert", "tone": "negative", "rating": 1, "reliable": false}
https://python.langchain.com/docs/integrations/document_transformers/openvino_rerank/
[OpenVINO™](https://github.com/openvinotoolkit/openvino) is an open-source toolkit for optimizing and deploying AI inference. The OpenVINO™ Runtime supports various hardware [devices](https://github.com/openvinotoolkit/openvino?tab=readme-ov-file#supported-hardware-matrix) including x86 and ARM CPUs, and Intel GPUs. It can help to boost deep learning performance in Computer Vision, Automatic Speech Recognition, Natural Language Processing and other common tasks. Hugging Face rerank model can be supported by OpenVINO through `OpenVINOReranker` class. If you have an Intel GPU, you can specify `model_kwargs={"device": "GPU"}` to run inference on it. Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. ``` /home/ethan/intel/langchain_test/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdmFramework not specified. Using pt to export the model.Using the export variant default. Available variants are: - default: The default ONNX variant.Using framework PyTorch: 2.2.1+cu121/home/ethan/intel/langchain_test/lib/python3.10/site-packages/transformers/modeling_utils.py:4193: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead warnings.warn(Compiling the model to CPU ... ``` ``` INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, onnx, openvinoDocument 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 73}----------------------------------------------------------------------------------------------------Document 2:Danielle says Heath was a fighter to the very end. He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 88}----------------------------------------------------------------------------------------------------Document 3:The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. But cancer from prolonged exposure to burn pits ravaged Heath’s lungs and body. 
Danielle says Heath was a fighter to the very end.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 87}----------------------------------------------------------------------------------------------------Document 4:I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 89}----------------------------------------------------------------------------------------------------Document 5:Every Administration says they’ll do it, but we are actually doing it. We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 29}----------------------------------------------------------------------------------------------------Document 6:He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 2}----------------------------------------------------------------------------------------------------Document 7:As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel. I get it. That’s why my top priority is getting prices under control.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 35}----------------------------------------------------------------------------------------------------Document 8:But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 23}----------------------------------------------------------------------------------------------------Document 9:To all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. 
Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 14}----------------------------------------------------------------------------------------------------Document 10:The one thing all Americans agree on is that the tax system is not fair. We have to fix it. I’m not looking to punish anyone. But let’s make sure corporations and the wealthiest Americans start paying their fair share. Just last year, 55 Fortune 500 corporations earned $40 billion in profits and paid zero dollars in federal income tax. That’s simply not fair. That’s why I’ve proposed a 15% minimum tax rate for corporations.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 46}----------------------------------------------------------------------------------------------------Document 11:Joshua is here with us tonight. Yesterday was his birthday. Happy birthday, buddy. For Joshua, and for the 200,000 other young people with Type 1 diabetes, let’s cap the cost of insulin at $35 a month so everyone can afford it. Drug companies will still do very well. And while we’re at it let Medicare negotiate lower prices for prescription drugs, like the VA already does.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 41}----------------------------------------------------------------------------------------------------Document 12:As I’ve told Xi Jinping, it is never a good bet to bet against the American people. We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 26}----------------------------------------------------------------------------------------------------Document 13:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 79}----------------------------------------------------------------------------------------------------Document 14:My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 85}----------------------------------------------------------------------------------------------------Document 15:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. 
Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 74}----------------------------------------------------------------------------------------------------Document 16:I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 67}----------------------------------------------------------------------------------------------------Document 17:We’ll build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes—so every child—and every American—has clean water to drink at home and at school, provide affordable high-speed internet for every American—urban, suburban, rural, and tribal communities. 4,000 projects have already been announced. And tonight, I’m announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 27}----------------------------------------------------------------------------------------------------Document 18:Cancer is the #2 cause of death in America–second only to heart disease. Last month, I announced our plan to supercharge the Cancer Moonshot that President Obama asked me to lead six years ago. Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. More support for patients and families. To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 90}----------------------------------------------------------------------------------------------------Document 19:He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 18}----------------------------------------------------------------------------------------------------Document 20:He and his Dad both have Type 1 diabetes, which means they need insulin every day. Insulin costs about $10 a vial to make. But drug companies charge families like Joshua and his Dad up to 30 times more. I spoke with Joshua’s mom. Imagine what it’s like to look at your child who needs insulin and have no idea how you’re going to pay for it. 
What it does to your dignity, your ability to look your child in the eye, to be the parent you expect to be.Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 40} ``` Now let’s wrap our base retriever with a `ContextualCompressionRetriever`, using `OpenVINOReranker` as a compressor. After reranking, the top 4 documents are different from the top 4 documents retrieved by the base retriever. ``` Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Metadata: {'id': 0, 'relevance_score': tensor(0.6148)}----------------------------------------------------------------------------------------------------Document 2:He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand.Metadata: {'id': 16, 'relevance_score': tensor(0.0373)}----------------------------------------------------------------------------------------------------Document 3:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.Metadata: {'id': 18, 'relevance_score': tensor(0.0131)}----------------------------------------------------------------------------------------------------Document 4:To all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.Metadata: {'id': 6, 'relevance_score': tensor(0.0098)} ``` It is possible to export your rerank model to the OpenVINO IR format with `OVModelForSequenceClassification`, and load the model from local folder. ``` Compiling the model to CPU ... ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:50.648Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_transformers/openvino_rerank/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_transformers/openvino_rerank/", "description": "OpenVINO™ is an", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4412", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"openvino_rerank\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:50 GMT", "etag": "W/\"03b46f090fd5dd217bfb73e209a7e840\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::zbscg-1713753590508-2ddb670b0882" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_transformers/openvino_rerank/", "property": "og:url" }, { "content": "OpenVINO Reranker | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "OpenVINO™ is an", "property": "og:description" } ], "title": "OpenVINO Reranker | 🦜️🔗 LangChain" }
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. The OpenVINO™ Runtime supports various hardware devices including x86 and ARM CPUs, and Intel GPUs. It can help to boost deep learning performance in Computer Vision, Automatic Speech Recognition, Natural Language Processing and other common tasks. Hugging Face rerank model can be supported by OpenVINO through OpenVINOReranker class. If you have an Intel GPU, you can specify model_kwargs={"device": "GPU"} to run inference on it. Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. /home/ethan/intel/langchain_test/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Framework not specified. Using pt to export the model. Using the export variant default. Available variants are: - default: The default ONNX variant. Using framework PyTorch: 2.2.1+cu121 /home/ethan/intel/langchain_test/lib/python3.10/site-packages/transformers/modeling_utils.py:4193: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead warnings.warn( Compiling the model to CPU ... INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, onnx, openvino Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 73} ---------------------------------------------------------------------------------------------------- Document 2: Danielle says Heath was a fighter to the very end. He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 88} ---------------------------------------------------------------------------------------------------- Document 3: The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. But cancer from prolonged exposure to burn pits ravaged Heath’s lungs and body. Danielle says Heath was a fighter to the very end. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 87} ---------------------------------------------------------------------------------------------------- Document 4: I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. 
And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 89} ---------------------------------------------------------------------------------------------------- Document 5: Every Administration says they’ll do it, but we are actually doing it. We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 29} ---------------------------------------------------------------------------------------------------- Document 6: He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 2} ---------------------------------------------------------------------------------------------------- Document 7: As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel. I get it. That’s why my top priority is getting prices under control. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 35} ---------------------------------------------------------------------------------------------------- Document 8: But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 23} ---------------------------------------------------------------------------------------------------- Document 9: To all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 14} ---------------------------------------------------------------------------------------------------- Document 10: The one thing all Americans agree on is that the tax system is not fair. We have to fix it. I’m not looking to punish anyone. But let’s make sure corporations and the wealthiest Americans start paying their fair share. 
Just last year, 55 Fortune 500 corporations earned $40 billion in profits and paid zero dollars in federal income tax. That’s simply not fair. That’s why I’ve proposed a 15% minimum tax rate for corporations. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 46} ---------------------------------------------------------------------------------------------------- Document 11: Joshua is here with us tonight. Yesterday was his birthday. Happy birthday, buddy. For Joshua, and for the 200,000 other young people with Type 1 diabetes, let’s cap the cost of insulin at $35 a month so everyone can afford it. Drug companies will still do very well. And while we’re at it let Medicare negotiate lower prices for prescription drugs, like the VA already does. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 41} ---------------------------------------------------------------------------------------------------- Document 12: As I’ve told Xi Jinping, it is never a good bet to bet against the American people. We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 26} ---------------------------------------------------------------------------------------------------- Document 13: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 79} ---------------------------------------------------------------------------------------------------- Document 14: My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 85} ---------------------------------------------------------------------------------------------------- Document 15: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 74} ---------------------------------------------------------------------------------------------------- Document 16: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. 
I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 67} ---------------------------------------------------------------------------------------------------- Document 17: We’ll build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes—so every child—and every American—has clean water to drink at home and at school, provide affordable high-speed internet for every American—urban, suburban, rural, and tribal communities. 4,000 projects have already been announced. And tonight, I’m announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 27} ---------------------------------------------------------------------------------------------------- Document 18: Cancer is the #2 cause of death in America–second only to heart disease. Last month, I announced our plan to supercharge the Cancer Moonshot that President Obama asked me to lead six years ago. Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. More support for patients and families. To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 90} ---------------------------------------------------------------------------------------------------- Document 19: He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 18} ---------------------------------------------------------------------------------------------------- Document 20: He and his Dad both have Type 1 diabetes, which means they need insulin every day. Insulin costs about $10 a vial to make. But drug companies charge families like Joshua and his Dad up to 30 times more. I spoke with Joshua’s mom. Imagine what it’s like to look at your child who needs insulin and have no idea how you’re going to pay for it. What it does to your dignity, your ability to look your child in the eye, to be the parent you expect to be. Metadata: {'source': '../../modules/state_of_the_union.txt', 'id': 40} Now let’s wrap our base retriever with a ContextualCompressionRetriever, using OpenVINOReranker as a compressor. After reranking, the top 4 documents are different from the top 4 documents retrieved by the base retriever. Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 
Metadata: {'id': 0, 'relevance_score': tensor(0.6148)} ---------------------------------------------------------------------------------------------------- Document 2: He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand. Metadata: {'id': 16, 'relevance_score': tensor(0.0373)} ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. Metadata: {'id': 18, 'relevance_score': tensor(0.0131)} ---------------------------------------------------------------------------------------------------- Document 4: To all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. Metadata: {'id': 6, 'relevance_score': tensor(0.0098)} It is possible to export your rerank model to the OpenVINO IR format with OVModelForSequenceClassification, and load the model from local folder. Compiling the model to CPU ...
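As a rough illustration of the export step mentioned in the last paragraph (the actual commands are not preserved in this capture), the rerank model can be converted to OpenVINO IR with `optimum-intel` and then loaded from the local folder; the model id and folder name below are placeholders.

```
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "BAAI/bge-reranker-large"   # placeholder checkpoint
ov_dir = "bge-reranker-large-ov"       # local folder that will hold the exported IR files

# Export the PyTorch checkpoint to OpenVINO IR and save it together with its tokenizer.
ov_model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
AutoTokenizer.from_pretrained(model_id).save_pretrained(ov_dir)
ov_model.save_pretrained(ov_dir)
```

The saved folder can then be passed as the model path when constructing `OpenVINOReranker`, which is what produces the final `Compiling the model to CPU ...` message shown above.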
https://python.langchain.com/docs/integrations/document_loaders/xml/
## XML

The `UnstructuredXMLLoader` is used to load `XML` files. The loader works with `.xml` files. The page content will be the text extracted from the XML tags.

```
from langchain_community.document_loaders import UnstructuredXMLLoader
```

```
loader = UnstructuredXMLLoader(
    "example_data/factbook.xml",
)
docs = loader.load()
docs[0]
```

```
Document(page_content='United States\n\nWashington, DC\n\nJoe Biden\n\nBaseball\n\nCanada\n\nOttawa\n\nJustin Trudeau\n\nHockey\n\nFrance\n\nParis\n\nEmmanuel Macron\n\nSoccer\n\nTrinidad & Tobado\n\nPort of Spain\n\nKeith Rowley\n\nTrack & Field', metadata={'source': 'example_data/factbook.xml'})
```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:51.339Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/xml/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/xml/", "description": "The UnstructuredXMLLoader is used to load XML files. The loader", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3485", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"xml\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:51 GMT", "etag": "W/\"9e9151d0f1b9d6eee140e53089b3148d\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::86l5f-1713753591288-cc47c2cacf60" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/xml/", "property": "og:url" }, { "content": "XML | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "The UnstructuredXMLLoader is used to load XML files. The loader", "property": "og:description" } ], "title": "XML | 🦜️🔗 LangChain" }
XML The UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags. from langchain_community.document_loaders import UnstructuredXMLLoader loader = UnstructuredXMLLoader( "example_data/factbook.xml", ) docs = loader.load() docs[0] Document(page_content='United States\n\nWashington, DC\n\nJoe Biden\n\nBaseball\n\nCanada\n\nOttawa\n\nJustin Trudeau\n\nHockey\n\nFrance\n\nParis\n\nEmmanuel Macron\n\nSoccer\n\nTrinidad & Tobado\n\nPort of Spain\n\nKeith Rowley\n\nTrack & Field', metadata={'source': 'example_data/factbook.xml'})
https://python.langchain.com/docs/integrations/document_loaders/xorbits/
``` 0%| | 0.00/100 [00:00<?, ?it/s] ``` ``` 0%| | 0.00/100 [00:00<?, ?it/s] ``` ``` [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})] ``` ``` 0%| | 0.00/100 [00:00<?, ?it/s] ``` ``` page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}page_content='Rays' 
metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55} ```
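Only the notebook's outputs survived extraction here; the loading code itself is missing. A minimal sketch of how documents like these are typically produced with the community `XorbitsLoader` follows — the CSV path and the `Team` page-content column are inferred from the output above, not taken from this page.

```
import xorbits.pandas as pd
from langchain_community.document_loaders import XorbitsLoader

# Read the MLB payroll/wins table into a Xorbits DataFrame (path is a placeholder).
df = pd.read_csv("example_data/mlb_teams_2012.csv")

# The chosen column becomes page_content; the remaining columns become metadata.
loader = XorbitsLoader(df, page_content_column="Team")
docs = loader.load()

# For large frames, lazy_load() yields documents one at a time instead of materializing them all.
for doc in loader.lazy_load():
    print(doc)
```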
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:39:52.119Z", "loadedUrl": "https://python.langchain.com/docs/integrations/document_loaders/xorbits/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/document_loaders/xorbits/", "description": "This notebook goes over how to load data from a", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"xorbits\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:39:52 GMT", "etag": "W/\"edee285651d87d4ee35118fe67928763\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::h6wxh-1713753591972-01c05d2617dc" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/document_loaders/xorbits/", "property": "og:url" }, { "content": "Xorbits Pandas DataFrame | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This notebook goes over how to load data from a", "property": "og:description" } ], "title": "Xorbits Pandas DataFrame | 🦜️🔗 LangChain" }
0%| | 0.00/100 [00:00<?, ?it/s] 0%| | 0.00/100 [00:00<?, ?it/s] [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})] 0%| | 0.00/100 [00:00<?, ?it/s] page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 
64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55}
https://python.langchain.com/docs/integrations/document_loaders/youtube_audio/
## YouTube audio

Building chat or QA applications on YouTube videos is a topic of high interest.

Below we show how to easily go from a `YouTube url` to `audio of the video` to `text` to `chat`!

We will use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text, and the `OpenAIWhisperParserLocal` for local support and running on private clouds or on premises.

Note: You will need to have an `OPENAI_API_KEY` supplied.

```
from langchain_community.document_loaders.blob_loaders.youtube_audio import (
    YoutubeAudioLoader,
)
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import (
    OpenAIWhisperParser,
    OpenAIWhisperParserLocal,
)
```

We will use `yt_dlp` to download audio for YouTube urls. We will use `pydub` to split downloaded audio files (such that we adhere to the Whisper API’s 25MB file size limit).

```
%pip install --upgrade --quiet yt_dlp
%pip install --upgrade --quiet pydub
%pip install --upgrade --quiet librosa
```

### YouTube url to text

Use `YoutubeAudioLoader` to fetch / download the audio files.

Then, use `OpenAIWhisperParser()` to transcribe them to text.

Let’s take the first lecture of Andrej Karpathy’s YouTube course as an example!

```
# set a flag to switch between local and remote parsing
# change this to True if you want to use local parsing
local = False
```

```
# Two Karpathy lecture videos
urls = ["https://youtu.be/kCc8FmEb1nY", "https://youtu.be/VMj-3S1tku0"]

# Directory to save audio files
save_dir = "~/Downloads/YouTube"

# Transcribe the videos to text
if local:
    loader = GenericLoader(
        YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParserLocal()
    )
else:
    loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())
docs = loader.load()
```

```
[youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY
[youtube] kCc8FmEb1nY: Downloading webpage
[youtube] kCc8FmEb1nY: Downloading android player API JSON
[info] kCc8FmEb1nY: Downloading 1 format(s): 140
[dashsegments] Total fragments: 11
[download] Destination: /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a
[download] 100% of 107.73MiB in 00:00:18 at 5.92MiB/s
[FixupM4a] Correcting container of "/Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a"
[ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a; file is already in target format m4a
[youtube] Extracting URL: https://youtu.be/VMj-3S1tku0
[youtube] VMj-3S1tku0: Downloading webpage
[youtube] VMj-3S1tku0: Downloading android player API JSON
[info] VMj-3S1tku0: Downloading 1 format(s): 140
[download] /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a has already been downloaded
[download] 100% of 134.98MiB
[ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a; file is already in target format m4a
```

```
# Returns a list of Documents, which can be easily viewed or parsed
docs[0].page_content[0:500]
```

```
"Hello, my name is
Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I w"
```

### Building a chat app from YouTube video

Given `Documents`, we can easily enable chat / question+answering.

```
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
```

```
# Combine doc
combined_docs = [doc.page_content for doc in docs]
text = " ".join(combined_docs)
```

```
# Split them
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=150)
splits = text_splitter.split_text(text)
```

```
# Build an index
embeddings = OpenAIEmbeddings()
vectordb = FAISS.from_texts(splits, embeddings)
```

```
# Build a QA chain
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",
    retriever=vectordb.as_retriever(),
)
```

```
# Ask a question!
query = "Why do we need to zero out the gradient before backprop at each step?"
qa_chain.run(query)
```

```
"We need to zero out the gradient before backprop at each step because the backward pass accumulates gradients in the grad attribute of each parameter. If we don't reset the grad to zero before each backward pass, the gradients will accumulate and add up, leading to incorrect updates and slower convergence. By resetting the grad to zero before each backward pass, we ensure that the gradients are calculated correctly and that the optimization process works as intended."
```

```
query = "What is the difference between an encoder and decoder?"
qa_chain.run(query)
```

```
'In the context of transformers, an encoder is a component that reads in a sequence of input tokens and generates a sequence of hidden representations. On the other hand, a decoder is a component that takes in a sequence of hidden representations and generates a sequence of output tokens. The main difference between the two is that the encoder is used to encode the input sequence into a fixed-length representation, while the decoder is used to decode the fixed-length representation into an output sequence. In machine translation, for example, the encoder reads in the source language sentence and generates a fixed-length representation, which is then used by the decoder to generate the target language sentence.'
```

```
query = "For any token, what are x, k, v, and q?"
qa_chain.run(query)
```

```
'For any token, x is the input vector that contains the private information of that token, k and q are the key and query vectors respectively, which are produced by forwarding linear modules on x, and v is the vector that is calculated by propagating the same linear module on x again. The key vector represents what the token contains, and the query vector represents what the token is looking for. The vector v is the information that the token will communicate to other tokens if it finds them interesting, and it gets aggregated for the purposes of the self-attention mechanism.'
```
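The 25MB limit mentioned above is handled inside the pipeline: `pydub` is used to split the downloaded audio before chunks are sent to the Whisper API, so you never call it directly in this notebook. Purely as an illustration of what that splitting looks like, here is a minimal, hypothetical sketch (it assumes `pydub` and `ffmpeg` are installed; the 20-minute chunk length and the output naming scheme are arbitrary choices for this example, not the parser's actual internals):

```python
from pydub import AudioSegment


def split_audio(path: str, chunk_minutes: int = 20) -> list[str]:
    """Split one audio file into fixed-length chunks so each piece stays under the API size limit."""
    audio = AudioSegment.from_file(path)  # requires ffmpeg for m4a input
    chunk_ms = chunk_minutes * 60 * 1000
    chunk_paths = []
    for i, start in enumerate(range(0, len(audio), chunk_ms)):
        out_path = f"{path}.part{i}.mp3"  # hypothetical naming scheme
        audio[start : start + chunk_ms].export(out_path, format="mp3")
        chunk_paths.append(out_path)
    return chunk_paths
```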
https://python.langchain.com/docs/integrations/document_transformers/voyageai-reranker/
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. ``` Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.----------------------------------------------------------------------------------------------------Document 3:We cannot let this happen.Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.----------------------------------------------------------------------------------------------------Document 4:He will never extinguish their love of freedom. He will never weaken the resolve of the free world.We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.The pandemic has been punishing.And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more.I understand.----------------------------------------------------------------------------------------------------Document 5:As I’ve told Xi Jinping, it is never a good bet to bet against the American people.We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America.And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.----------------------------------------------------------------------------------------------------Document 6:I understand.I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it.That’s why one of the first things I did as President was fight to pass the American Rescue Plan.Because people were hurting. 
We needed to act, and we did.Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.----------------------------------------------------------------------------------------------------Document 7:I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.I’ve worked on these issues a long time.I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.So let’s not abandon our streets. Or choose between safety and equal justice.----------------------------------------------------------------------------------------------------Document 8:My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.Our troops in Iraq and Afghanistan faced many dangers.One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more.When they came home, many of the world’s fittest and best trained warriors were never the same.Headaches. Numbness. Dizziness.----------------------------------------------------------------------------------------------------Document 9:And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.By the end of this year, the deficit will be down to less than half what it was before I took office.The only president ever to cut the deficit by more than one trillion dollars in a single year.Lowering your costs also means demanding more competition.I’m a capitalist, but capitalism without competition isn’t capitalism.It’s exploitation—and it drives up prices.----------------------------------------------------------------------------------------------------Document 10:Headaches. Numbness. 
Dizziness.A cancer that would put them in a flag-draped coffin.I know.One of those soldiers was my son Major Beau Biden.We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.But I’m committed to finding out everything we can.Committed to military families like Danielle Robinson from Ohio.The widow of Sergeant First Class Heath Robinson.----------------------------------------------------------------------------------------------------Document 11:I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera.They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.Officer Mora was 27 years old.Officer Rivera was 22.Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.----------------------------------------------------------------------------------------------------Document 12:This was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen.We’re done talking about infrastructure weeks.We’re going to have an infrastructure decade.It is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world—particularly with China.As I’ve told Xi Jinping, it is never a good bet to bet against the American people.----------------------------------------------------------------------------------------------------Document 13:So let’s not abandon our streets. Or choose between safety and equal justice.Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.----------------------------------------------------------------------------------------------------Document 14:Let’s pass the Paycheck Fairness Act and paid leave.Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty.Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.And let’s pass the PRO Act when a majority of workers want to form a union—they shouldn’t be stopped.----------------------------------------------------------------------------------------------------Document 15:He met the Ukrainian people.From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.----------------------------------------------------------------------------------------------------Document 16:To all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world.And I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. 
And I will use every tool at our disposal to protect American businesses and consumers.Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.----------------------------------------------------------------------------------------------------Document 17:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.----------------------------------------------------------------------------------------------------Document 18:But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century.Vice President Harris and I ran for office with a new economic vision for America.Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom upand the middle out, not from the top down.----------------------------------------------------------------------------------------------------Document 19:Every Administration says they’ll do it, but we are actually doing it.We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America.But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors.----------------------------------------------------------------------------------------------------Document 20:The only nation that can be defined by a single word: possibilities.So on this night, in our 245th year as a nation, I have come to report on the State of the Union.And my report is this: the State of the Union is strong—because you, the American people, are strong.We are stronger today than we were a year ago.And we will be stronger a year from now than we are today.Now is our moment to meet and overcome the challenges of our time.And we will, as one people.One America. ``` Now let’s wrap our base retriever with a `ContextualCompressionRetriever`. We’ll use the Voyage AI reranker to rerank the returned results. ``` Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:So let’s not abandon our streets. 
Or choose between safety and equal justice.Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.----------------------------------------------------------------------------------------------------Document 3:I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.I’ve worked on these issues a long time.I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.So let’s not abandon our streets. Or choose between safety and equal justice. ``` ``` {'query': 'What did the president say about Ketanji Brown Jackson', 'result': " The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. "} ```
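The outputs above come from a retriever and reranker whose setup code is not shown in this extraction. Below is a minimal sketch of how such a pipeline is typically wired, assuming the `langchain-voyageai` partner package (`VoyageAIEmbeddings`, `VoyageAIRerank`), a local `state_of_the_union.txt` file, and `VOYAGE_API_KEY`/`OPENAI_API_KEY` set in the environment; the model names, chunk sizes, and `top_k` value are illustrative assumptions rather than anything this page prescribes:

```python
from langchain.chains import RetrievalQA
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_voyageai import VoyageAIEmbeddings, VoyageAIRerank

# Chunk the speech and build a base retriever that returns 20 candidate chunks
docs = TextLoader("state_of_the_union.txt").load()
texts = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100).split_documents(docs)
retriever = FAISS.from_documents(texts, VoyageAIEmbeddings(model="voyage-2")).as_retriever(
    search_kwargs={"k": 20}
)

# Wrap the base retriever so the Voyage AI reranker keeps only the top 3 of those 20 candidates
compression_retriever = ContextualCompressionRetriever(
    base_compressor=VoyageAIRerank(model="rerank-lite-1", top_k=3),
    base_retriever=retriever,
)

# A QA chain over the reranked retriever yields the {'query': ..., 'result': ...} dict shown above
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(temperature=0), retriever=compression_retriever)
print(qa.invoke({"query": "What did the president say about Ketanji Brown Jackson"}))
```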
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript/
## YouTube transcripts > [YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by Google. This notebook covers how to load documents from `YouTube transcripts`. ``` from langchain_community.document_loaders import YoutubeLoader ``` ``` %pip install --upgrade --quiet youtube-transcript-api ``` ``` loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=False) ``` ### Add video info[​](#add-video-info "Direct link to Add video info") ``` %pip install --upgrade --quiet pytube ``` ``` loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load() ``` ### Add language preferences[​](#add-language-preferences "Direct link to Add language preferences") `language` param: a list of language codes in descending priority; `en` by default. `translation` param: a translation preference; you can translate the available transcript to your preferred language. ``` loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True, language=["en", "id"], translation="en",)loader.load() ``` ## YouTube loader from Google Cloud[​](#youtube-loader-from-google-cloud "Direct link to YouTube loader from Google Cloud") ### Prerequisites[​](#prerequisites "Direct link to Prerequisites") 1. Create a Google Cloud project or use an existing project 2. Enable the [Youtube Api](https://console.cloud.google.com/apis/enableflow?apiid=youtube.googleapis.com&project=sixth-grammar-344520) 3. [Authorize credentials for desktop app](https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application) 4. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api` ### 🧑 Instructions for ingesting your Google Docs data[​](#instructions-for-ingesting-your-google-docs-data "Direct link to 🧑 Instructions for ingesting your Google Docs data") By default, the `GoogleDriveLoader` expects the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable using the `credentials_file` keyword argument. Same thing with `token.json`. Note that `token.json` will be created automatically the first time you use the loader. `GoogleApiYoutubeLoader` can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL: Note that, depending on your setup, the `service_account_path` needs to be set. See [here](https://developers.google.com/drive/api/v3/quickstart/python) for more details. ``` # Init the GoogleApiClientfrom pathlib import Pathfrom langchain_community.document_loaders import GoogleApiClient, GoogleApiYoutubeLoadergoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a Channelyoutube_loader_channel = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="Reducible", captions_language="en",)# Use Youtube Idsyoutube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)# returns a list of Documentsyoutube_loader_channel.load() ```
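Once loaded, the transcript `Documents` slot straight into the usual chunk-and-index pipeline. A small, hypothetical sketch (the chunk sizes are arbitrary choices for illustration):

```python
from langchain_community.document_loaders import YoutubeLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the transcript as Documents, then split it before embedding or QA
docs = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=False
).load()
splits = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=150).split_documents(docs)
print(len(splits), splits[0].page_content[:200])
```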
https://python.langchain.com/docs/integrations/document_loaders/yuque/
This notebook covers how to load documents from `Yuque`. You can obtain the personal access token by clicking on your personal avatar in the [Personal Settings](https://www.yuque.com/settings/tokens) page. ``` from langchain_community.document_loaders import YuqueLoader ```
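With the import above and a personal access token, instantiating and running the loader is a one-liner. A minimal sketch; reading the token from a `YUQUE_ACCESS_TOKEN` environment variable is an assumption of this example, not a requirement of the loader:

```python
import os

from langchain_community.document_loaders import YuqueLoader

# The token comes from the Yuque Personal Settings page mentioned above
loader = YuqueLoader(access_token=os.environ["YUQUE_ACCESS_TOKEN"])
docs = loader.load()
print(len(docs), docs[0].metadata if docs else "no documents loaded")
```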
https://python.langchain.com/docs/integrations/graphs/amazon_neptune_open_cypher/
## Amazon Neptune with Cypher > [Amazon Neptune](https://aws.amazon.com/neptune/) is a high-performance graph analytics and serverless database for superior scalability and availability. > > This example shows the QA chain that queries the `Neptune` graph database using `openCypher` and returns a human-readable response. > > [Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph. > > [openCypher](https://opencypher.org/) is an open-source implementation of Cypher. # Neptune Open Cypher QA Chain This QA chain queries Amazon Neptune using openCypher and returns a human-readable response. LangChain supports both [Neptune Database](https://docs.aws.amazon.com/neptune/latest/userguide/intro.html) and [Neptune Analytics](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/what-is-neptune-analytics.html) with `NeptuneOpenCypherQAChain`. Neptune Database is a serverless graph database designed for optimal scalability and availability. It provides a solution for graph database workloads that need to scale to 100,000 queries per second, Multi-AZ high availability, and multi-Region deployments. You can use Neptune Database for social networking, fraud alerting, and Customer 360 applications. Neptune Analytics is an analytics database engine that can quickly analyze large amounts of graph data in memory to get insights and find trends. Neptune Analytics is a solution for quickly analyzing existing graph databases or graph datasets stored in a data lake. It uses popular graph analytic algorithms and low-latency analytic queries. ## Using Neptune Database[​](#using-neptune-database "Direct link to Using Neptune Database") ``` from langchain_community.graphs import NeptuneGraphhost = "<neptune-host>"port = 8182use_https = Truegraph = NeptuneGraph(host=host, port=port, use_https=use_https) ``` ### Using Neptune Analytics[​](#using-neptune-analytics "Direct link to Using Neptune Analytics") ``` from langchain_community.graphs import NeptuneAnalyticsGraphgraph = NeptuneAnalyticsGraph(graph_identifier="<neptune-analytics-graph-id>") ``` ## Using NeptuneOpenCypherQAChain[​](#using-neptuneopencypherqachain "Direct link to Using NeptuneOpenCypherQAChain") This QA chain queries the Neptune graph database using openCypher and returns a human-readable response. ``` from langchain.chains import NeptuneOpenCypherQAChainfrom langchain_openai import ChatOpenAIllm = ChatOpenAI(temperature=0, model="gpt-4")chain = NeptuneOpenCypherQAChain.from_llm(llm=llm, graph=graph)chain.invoke("how many outgoing routes does the Austin airport have?") ``` ``` 'The Austin airport has 98 outgoing routes.' ```
https://python.langchain.com/docs/integrations/graphs/nebula_graph/
## NebulaGraph > [NebulaGraph](https://www.nebula-graph.io/) is an open-source, distributed, scalable, lightning-fast graph database built for super large-scale graphs with milliseconds of latency. It uses the `nGQL` graph query language. > > [nGQL](https://docs.nebula-graph.io/3.0.0/3.ngql-guide/1.nGQL-overview/1.overview/) is a declarative graph query language for `NebulaGraph`. It allows expressive and efficient graph patterns. `nGQL` is designed for both developers and operations professionals. `nGQL` is an SQL-like query language. This notebook shows how to use LLMs to provide a natural language interface to the `NebulaGraph` database. ## Setting up[​](#setting-up "Direct link to Setting up") You can start the `NebulaGraph` cluster as a Docker container by running the following script: ``` curl -fsSL nebula-up.siwei.io/install.sh | bash ``` Other options are: - Install as a [Docker Desktop Extension](https://www.docker.com/blog/distributed-cloud-native-graph-database-nebulagraph-docker-extension/). See [here](https://docs.nebula-graph.io/3.5.0/2.quick-start/1.quick-start-workflow/) - NebulaGraph Cloud Service. See [here](https://www.nebula-graph.io/cloud) - Deploy from package, source code, or via Kubernetes. See [here](https://docs.nebula-graph.io/) Once the cluster is running, we can create the `SPACE` and `SCHEMA` for the database. ``` %pip install --upgrade --quiet ipython-ngql%load_ext ngql# connect ngql jupyter extension to nebulagraph%ngql --address 127.0.0.1 --port 9669 --user root --password nebula# create a new space%ngql CREATE SPACE IF NOT EXISTS langchain(partition_num=1, replica_factor=1, vid_type=fixed_string(128)); ``` ``` # Wait for a few seconds for the space to be created.%ngql USE langchain; ``` Create the schema; for the full dataset, refer [here](https://www.siwei.io/en/nebulagraph-etl-dbt/). ``` %%ngqlCREATE TAG IF NOT EXISTS movie(name string);CREATE TAG IF NOT EXISTS person(name string, birthdate string);CREATE EDGE IF NOT EXISTS acted_in();CREATE TAG INDEX IF NOT EXISTS person_index ON person(name(128));CREATE TAG INDEX IF NOT EXISTS movie_index ON movie(name(128)); ``` Wait for schema creation to complete, then we can insert some data. ``` %%ngqlINSERT VERTEX person(name, birthdate) VALUES "Al Pacino":("Al Pacino", "1940-04-25");INSERT VERTEX movie(name) VALUES "The Godfather II":("The Godfather II");INSERT VERTEX movie(name) VALUES "The Godfather Coda: The Death of Michael Corleone":("The Godfather Coda: The Death of Michael Corleone");INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather II":();INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather Coda: The Death of Michael Corleone":(); ``` ``` from langchain.chains import NebulaGraphQAChainfrom langchain_community.graphs import NebulaGraphfrom langchain_openai import ChatOpenAI ``` ``` graph = NebulaGraph( space="langchain", username="root", password="nebula", address="127.0.0.1", port=9669, session_pool_size=30,) ``` ## Refresh graph schema information[​](#refresh-graph-schema-information "Direct link to Refresh graph schema information") If the schema of the database changes, you can refresh the schema information needed to generate nGQL statements.
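The refresh itself is a single call on the graph object created above. A minimal sketch (assuming the `refresh_schema()` / `get_schema` interface exposed by the `NebulaGraph` wrapper), whose printed schema summary is shown below:

```python
# Re-read tags, edge types, and relationships from NebulaGraph after schema changes
graph.refresh_schema()
print(graph.get_schema)
```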
``` Node properties: [{'tag': 'movie', 'properties': [('name', 'string')]}, {'tag': 'person', 'properties': [('name', 'string'), ('birthdate', 'string')]}]Edge properties: [{'edge': 'acted_in', 'properties': []}]Relationships: ['(:person)-[:acted_in]->(:movie)'] ``` ## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph") We can now use the graph Cypher QA chain to ask questions of the graph. ``` chain = NebulaGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True) ``` ``` chain.run("Who played in The Godfather II?") ``` ``` > Entering new NebulaGraphQAChain chain...Generated nGQL:MATCH (p:`person`)-[:acted_in]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'RETURN p.`person`.`name`Full Context:{'p.person.name': ['Al Pacino']}> Finished chain. ``` ``` 'Al Pacino played in The Godfather II.' ```
https://python.langchain.com/docs/integrations/graphs/amazon_neptune_sparql/
## Amazon Neptune with SPARQL > [Amazon Neptune](https://aws.amazon.com/neptune/) is a high-performance graph analytics and serverless database for superior scalability and availability. > > This example shows the QA chain that queries [Resource Description Framework (RDF)](https://en.wikipedia.org/wiki/Resource_Description_Framework) data in an `Amazon Neptune` graph database using the `SPARQL` query language and returns a human-readable response. > > [SPARQL](https://en.wikipedia.org/wiki/SPARQL) is a standard query language for `RDF` graphs. This example uses a `NeptuneRdfGraph` class that connects with the Neptune database and loads its schema. The `NeptuneSparqlQAChain` is used to connect the graph and LLM to ask natural language questions. This notebook demonstrates an example using organizational data. Requirements for running this notebook: - Neptune 1.2.x cluster accessible from this notebook - Kernel with Python 3.9 or higher - For Bedrock access, ensure the IAM role has this policy ``` { "Action": [ "bedrock:ListFoundationModels", "bedrock:InvokeModel" ], "Resource": "*", "Effect": "Allow"} ``` * S3 bucket for staging sample data. The bucket should be in the same account/region as Neptune. ## Setting up[​](#setting-up "Direct link to Setting up") ### Seed the W3C organizational data[​](#seed-the-w3c-organizational-data "Direct link to Seed the W3C organizational data") Seed the W3C organizational data: the W3C org ontology plus some instances. You will need an S3 bucket in the same region and account. Set `STAGE_BUCKET` as the name of that bucket. ``` STAGE_BUCKET = "<bucket-name>" ``` ``` %%bash -s "$STAGE_BUCKET"rm -rf datamkdir -p datacd dataecho getting org ontology and sample org instanceswget http://www.w3.org/ns/org.ttl wget https://raw.githubusercontent.com/aws-samples/amazon-neptune-ontology-example-blog/main/data/example_org.ttl echo Copying org ttl to S3aws s3 cp org.ttl s3://$1/org.ttlaws s3 cp example_org.ttl s3://$1/example_org.ttl ``` Bulk-load the org ttl, both the ontology and the instances. ``` %load -s s3://{STAGE_BUCKET} -f turtle --store-to loadres --run ``` ``` %load_status {loadres['payload']['loadId']} --errors --details ``` ### Setup Chain[​](#setup-chain "Direct link to Setup Chain") ``` !pip install --upgrade --quiet langchain langchain-community langchain-aws ``` \*\* Restart kernel \*\* ### Prepare an example[​](#prepare-an-example "Direct link to Prepare an example") ``` EXAMPLES = """<question>Find organizations.</question><sparql>PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX org: <http://www.w3.org/ns/org#> select ?org ?orgName where {{ ?org rdfs:label ?orgName .}} </sparql><question>Find sites of an organization</question><sparql>PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX org: <http://www.w3.org/ns/org#> select ?org ?orgName ?siteName where {{ ?org rdfs:label ?orgName . ?org org:hasSite/rdfs:label ?siteName . }} </sparql><question>Find suborganizations of an organization</question><sparql>PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX org: <http://www.w3.org/ns/org#> select ?org ?orgName ?subName where {{ ?org rdfs:label ?orgName . 
?org org:hasSubOrganization/rdfs:label ?subName .}} </sparql><question>Find organizational units of an organization</question><sparql>PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX org: <http://www.w3.org/ns/org#> select ?org ?orgName ?unitName where {{ ?org rdfs:label ?orgName . ?org org:hasUnit/rdfs:label ?unitName . }} </sparql><question>Find members of an organization. Also find their manager, or the member they report to.</question><sparql>PREFIX org: <http://www.w3.org/ns/org#> PREFIX foaf: <http://xmlns.com/foaf/0.1/> select * where {{ ?person rdf:type foaf:Person . ?person org:memberOf ?org . OPTIONAL {{ ?person foaf:firstName ?firstName . }} OPTIONAL {{ ?person foaf:family_name ?lastName . }} OPTIONAL {{ ?person org:reportsTo ?manager }} .}}</sparql><question>Find change events, such as mergers and acquisitions, of an organization</question><sparql>PREFIX org: <http://www.w3.org/ns/org#> select ?event ?prop ?obj where {{ ?org rdfs:label ?orgName . ?event rdf:type org:ChangeEvent . ?event org:originalOrganization ?origOrg . ?event org:resultingOrganization ?resultingOrg .}}</sparql>""" ``` ``` import boto3from langchain.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChainfrom langchain_aws import ChatBedrockfrom langchain_community.graphs import NeptuneRdfGraphhost = "<your host>"port = 8182 # change if differentregion = "us-east-1" # change if differentgraph = NeptuneRdfGraph(host=host, port=port, use_iam_auth=True, region_name=region)# Optionally change the schema# elems = graph.get_schema_elements# change elems ...# graph.load_schema(elems)MODEL_ID = "anthropic.claude-v2"bedrock_client = boto3.client("bedrock-runtime")llm = ChatBedrock(model_id=MODEL_ID, client=bedrock_client)chain = NeptuneSparqlQAChain.from_llm( llm=llm, graph=graph, examples=EXAMPLES, verbose=True, top_K=10, return_intermediate_steps=True, return_direct=False,) ``` ## Ask questions[​](#ask-questions "Direct link to Ask questions") The answers depend on the data we ingested above. ``` chain.invoke("""How many organizations are in the graph""") ``` ``` chain.invoke("""Are there any mergers or acquisitions""") ``` ``` chain.invoke("""Find organizations""") ``` ``` chain.invoke("""Find sites of MegaSystems or MegaFinancial""") ``` ``` chain.invoke("""Find a member who is manager of one or more members.""") ``` ``` chain.invoke("""Find five members and who their manager is.""") ``` ``` chain.invoke( """Find org units or suborganizations of The Mega Group. What are the sites of those units?""") ```
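Because the chain above was created with `return_intermediate_steps=True`, its response carries the generated SPARQL alongside the final answer. The snippet below is a minimal sketch of how you might inspect both; the `result` and `intermediate_steps` keys are assumed to follow the usual LangChain QA-chain convention and may differ in your version.

```
# Minimal sketch: inspect the generated SPARQL and the final answer.
# Assumes `chain` was built with return_intermediate_steps=True as above;
# the "result" / "intermediate_steps" keys are an assumption based on the
# common LangChain QA-chain convention.
response = chain.invoke("""How many organizations are in the graph""")

print("Answer:", response["result"])
for step in response["intermediate_steps"]:
    print("Intermediate step:", step)
```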
https://python.langchain.com/docs/integrations/graphs/neo4j_cypher/
## Neo4j > [Neo4j](https://neo4j.com/docs/getting-started/) is a graph database management system developed by `Neo4j, Inc`. > The data elements `Neo4j` stores are nodes, edges connecting them, and attributes of nodes and edges. Described by its developers as an ACID-compliant transactional database with native graph storage and processing, `Neo4j` is available in a non-open-source “community edition” licensed with a modification of the GNU General Public License, with online backup and high availability extensions licensed under a closed-source commercial license. Neo also licenses `Neo4j` with these extensions under closed-source commercial terms. > This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the `Cypher` query language. > [Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph. ## Setting up[​](#setting-up "Direct link to Setting up") You will need to have a running `Neo4j` instance. One option is to create a [free Neo4j database instance in their Aura cloud service](https://neo4j.com/cloud/platform/aura-graph-database/). You can also run the database locally using the [Neo4j Desktop application](https://neo4j.com/download/), or by running a Docker container. You can run a local Docker container by executing the following script: ``` docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latest ``` If you are using the Docker container, you need to wait a couple of seconds for the database to start. ``` from langchain.chains import GraphCypherQAChainfrom langchain_community.graphs import Neo4jGraphfrom langchain_openai import ChatOpenAI ``` ``` graph = Neo4jGraph( url="bolt://localhost:7687", username="neo4j", password="pleaseletmein") ``` ## Seeding the database[​](#seeding-the-database "Direct link to Seeding the database") Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same whether you run it once or multiple times. ``` graph.query( """MERGE (m:Movie {name:"Top Gun"})WITH mUNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actorMERGE (a:Actor {name:actor})MERGE (a)-[:ACTED_IN]->(m)""") ``` ## Refresh graph schema information[​](#refresh-graph-schema-information "Direct link to Refresh graph schema information") If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements; a sketch of the refresh call appears at the end of this page. ``` Node properties are the following:Movie {name: STRING},Actor {name: STRING}Relationship properties are the following:The relationships are the following:(:Actor)-[:ACTED_IN]->(:Movie) ``` ## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph") We can now use the graph Cypher QA chain to ask questions of the graph. ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True) ``` ``` chain.run("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.nameFull Context:[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]> Finished chain. ``` ``` 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.' 
``` ## Limit the number of results[​](#limit-the-number-of-results "Direct link to Limit the number of results") You can limit the number of results from the Cypher QA Chain using the `top_k` parameter. The default is 10. ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2) ``` ``` chain.run("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.nameFull Context:[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}]> Finished chain. ``` ``` 'Tom Cruise and Val Kilmer played in Top Gun.' ``` You can return intermediate steps from the Cypher QA Chain using the `return_intermediate_steps` parameter ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True) ``` ``` result = chain("Who played in Top Gun?")print(f"Intermediate steps: {result['intermediate_steps']}")print(f"Final answer: {result['result']}") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.nameFull Context:[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]> Finished chain.Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\nRETURN a.name"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}]Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun. ``` ## Return direct results[​](#return-direct-results "Direct link to Return direct results") You can return direct results from the Cypher QA Chain using the `return_direct` parameter ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True) ``` ``` chain.run("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.name> Finished chain. 
``` ``` [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] ``` ## Add examples in the Cypher generation prompt[​](#add-examples-in-the-cypher-generation-prompt "Direct link to Add examples in the Cypher generation prompt") You can define the Cypher statement you want the LLM to generate for particular questions ``` from langchain_core.prompts.prompt import PromptTemplateCYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.Examples: Here are a few examples of generated Cypher statements for particular questions:# How many people played in Top Gun?MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()RETURN count(*) AS numberOfActorsThe question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE)chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=CYPHER_GENERATION_PROMPT,) ``` ``` chain.run("How many people played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (m:Movie {name:"Top Gun"})<-[:ACTED_IN]-(:Actor)RETURN count(*) AS numberOfActorsFull Context:[{'numberOfActors': 4}]> Finished chain. ``` ``` 'Four people played in Top Gun.' ``` ## Use separate LLMs for Cypher and answer generation[​](#use-separate-llms-for-cypher-and-answer-generation "Direct link to Use separate LLMs for Cypher and answer generation") You can use the `cypher_llm` and `qa_llm` parameters to define different llms ``` chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True,) ``` ``` chain.run("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.nameFull Context:[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]> Finished chain. ``` ``` 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.' ``` ## Ignore specified node and relationship types[​](#ignore-specified-node-and-relationship-types "Direct link to Ignore specified node and relationship types") You can use `include_types` or `exclude_types` to ignore parts of the graph schema when generating Cypher statements. 
``` chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True, exclude_types=["Movie"],) ``` ``` # Inspect graph schemaprint(chain.graph_schema) ``` ``` Node properties are the following:Actor {name: STRING}Relationship properties are the following:The relationships are the following: ``` ## Validate generated Cypher statements[​](#validate-generated-cypher-statements "Direct link to Validate generated Cypher statements") You can use the `validate_cypher` parameter to validate and correct relationship directions in generated Cypher statements ``` chain = GraphCypherQAChain.from_llm( llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), graph=graph, verbose=True, validate_cypher=True,) ``` ``` chain.run("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.nameFull Context:[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]> Finished chain. ``` ``` 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.' ```
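The "Refresh graph schema information" section above shows the refreshed schema but not the call that produces it. A minimal sketch, assuming `Neo4jGraph` exposes the `refresh_schema()` method and `schema` property; check the reference for your version:

```
# Minimal sketch (assumed API): re-read the schema after the database changes
# and print it; this produces the node/relationship listing shown earlier.
graph.refresh_schema()
print(graph.schema)
```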
https://python.langchain.com/docs/integrations/graphs/apache_age/
## Apache AGE > [Apache AGE](https://age.apache.org/) is a PostgreSQL extension that provides graph database functionality. AGE is an acronym for A Graph Extension, and is inspired by Bitnine’s fork of PostgreSQL 10, AgensGraph, which is a multi-model database. The goal of the project is to create a single storage that can handle both relational and graph model data so that users can use standard ANSI SQL along with openCypher, the Graph query language. The data elements `Apache AGE` stores are nodes, edges connecting them, and attributes of nodes and edges. > This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the `Cypher` query language. > [Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph. ## Setting up[​](#setting-up "Direct link to Setting up") You will need to have a running `PostgreSQL` instance with the AGE extension installed. One option for testing is to run a Docker container using the official AGE Docker image. You can run a local Docker container by executing the following script: ``` docker run \ --name age \ -p 5432:5432 \ -e POSTGRES_USER=postgresUser \ -e POSTGRES_PASSWORD=postgresPW \ -e POSTGRES_DB=postgresDB \ -d \ apache/age ``` Additional instructions on running AGE in Docker can be found [here](https://hub.docker.com/r/apache/age). ``` from langchain.chains import GraphCypherQAChainfrom langchain_community.graphs.age_graph import AGEGraphfrom langchain_openai import ChatOpenAI ``` ``` conf = { "database": "postgresDB", "user": "postgresUser", "password": "postgresPW", "host": "localhost", "port": 5432,}graph = AGEGraph(graph_name="age_test", conf=conf) ``` ## Seeding the database[​](#seeding-the-database "Direct link to Seeding the database") Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same whether you run it once or multiple times. ``` graph.query( """MERGE (m:Movie {name:"Top Gun"})WITH mUNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actorMERGE (a:Actor {name:actor})MERGE (a)-[:ACTED_IN]->(m)""") ``` ## Refresh graph schema information[​](#refresh-graph-schema-information "Direct link to Refresh graph schema information") If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements. 
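A minimal sketch of the refresh call follows; it assumes `AGEGraph` exposes the same `refresh_schema()` method and `schema` property as the other LangChain graph integrations, which may differ in your version. The listing after it shows the kind of schema information that refreshing makes available.

```
# Minimal sketch (assumed API): re-read the schema after the database changes.
# refresh_schema() and the schema property are assumed to mirror the other
# LangChain graph classes; check the AGEGraph reference for your version.
graph.refresh_schema()
print(graph.schema)
```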
``` Node properties are the following: [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}, {'properties': [{'property': 'property_a', 'type': 'STRING'}], 'labels': 'LabelA'}, {'properties': [], 'labels': 'LabelB'}, {'properties': [], 'labels': 'LabelC'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}] Relationship properties are the following: [{'properties': [], 'type': 'ACTED_IN'}, {'properties': [{'property': 'rel_prop', 'type': 'STRING'}], 'type': 'REL_TYPE'}] The relationships are the following: ['(:`Actor`)-[:`ACTED_IN`]->(:`Movie`)', '(:`LabelA`)-[:`REL_TYPE`]->(:`LabelB`)', '(:`LabelA`)-[:`REL_TYPE`]->(:`LabelC`)'] ``` ## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph") We can now use the graph Cypher QA chain to ask questions of the graph. ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True) ``` ``` chain.invoke("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)WHERE m.name = 'Top Gun'RETURN a.nameFull Context:[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]> Finished chain. ``` ``` {'query': 'Who played in Top Gun?', 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, Meg Ryan played in Top Gun.'} ``` ## Limit the number of results[​](#limit-the-number-of-results "Direct link to Limit the number of results") You can limit the number of results from the Cypher QA Chain using the `top_k` parameter. The default is 10. ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2) ``` ``` chain.invoke("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.nameFull Context:[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}]> Finished chain. ``` ``` {'query': 'Who played in Top Gun?', 'result': 'Tom Cruise, Val Kilmer played in Top Gun.'} ``` You can return intermediate steps from the Cypher QA Chain using the `return_intermediate_steps` parameter ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True) ``` ``` result = chain("Who played in Top Gun?")print(f"Intermediate steps: {result['intermediate_steps']}")print(f"Final answer: {result['result']}") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)WHERE m.name = 'Top Gun'RETURN a.nameFull Context:[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]> Finished chain.Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\nWHERE m.name = 'Top Gun'\nRETURN a.name"}, {'context': [{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]}]Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, Meg Ryan played in Top Gun. ``` ## Return direct results[​](#return-direct-results "Direct link to Return direct results") You can return direct results from the Cypher QA Chain using the `return_direct` parameter ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True) ``` ``` chain.invoke("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})RETURN a.name> Finished chain. 
``` ``` {'query': 'Who played in Top Gun?', 'result': [{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]} ``` ## Add examples in the Cypher generation prompt[​](#add-examples-in-the-cypher-generation-prompt "Direct link to Add examples in the Cypher generation prompt") You can define the Cypher statement you want the LLM to generate for particular questions ``` from langchain_core.prompts.prompt import PromptTemplateCYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.Examples: Here are a few examples of generated Cypher statements for particular questions:# How many people played in Top Gun?MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()RETURN count(*) AS numberOfActorsThe question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE)chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=CYPHER_GENERATION_PROMPT,) ``` ``` chain.invoke("How many people played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Movie {name:"Top Gun"})<-[:ACTED_IN]-(:Actor)RETURN count(*) AS numberOfActorsFull Context:[{'numberofactors': 4}]> Finished chain. ``` ``` {'query': 'How many people played in Top Gun?', 'result': "I don't know the answer."} ``` ## Use separate LLMs for Cypher and answer generation[​](#use-separate-llms-for-cypher-and-answer-generation "Direct link to Use separate LLMs for Cypher and answer generation") You can use the `cypher_llm` and `qa_llm` parameters to define different llms ``` chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True,) ``` ``` chain.invoke("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)WHERE m.name = 'Top Gun'RETURN a.nameFull Context:[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]> Finished chain. ``` ``` {'query': 'Who played in Top Gun?', 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'} ``` ## Ignore specified node and relationship types[​](#ignore-specified-node-and-relationship-types "Direct link to Ignore specified node and relationship types") You can use `include_types` or `exclude_types` to ignore parts of the graph schema when generating Cypher statements. 
``` chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True, exclude_types=["Movie"],) ``` ``` # Inspect graph schemaprint(chain.graph_schema) ``` ``` Node properties are the following:Actor {name: STRING},LabelA {property_a: STRING},LabelB {},LabelC {}Relationship properties are the following:ACTED_IN {},REL_TYPE {rel_prop: STRING}The relationships are the following:(:LabelA)-[:REL_TYPE]->(:LabelB),(:LabelA)-[:REL_TYPE]->(:LabelC) ``` ## Validate generated Cypher statements[​](#validate-generated-cypher-statements "Direct link to Validate generated Cypher statements") You can use the `validate_cypher` parameter to validate and correct relationship directions in generated Cypher statements ``` chain = GraphCypherQAChain.from_llm( llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), graph=graph, verbose=True, validate_cypher=True,) ``` ``` chain.invoke("Who played in Top Gun?") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)WHERE m.name = 'Top Gun'RETURN a.nameFull Context:[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]> Finished chain. ``` ``` {'query': 'Who played in Top Gun?', 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, Meg Ryan played in Top Gun.'} ```
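The `graph.query()` method used for seeding above can also run Cypher directly, which is a handy way to sanity-check what the chain's generated statements will return. A small example against the data inserted earlier; the exact shape of the returned rows (for instance the `actor` key produced by the alias) is an assumption.

```
# Run Cypher directly against the AGE graph, bypassing the LLM entirely.
# Uses the same graph.query() call that seeded the database above; the
# row format shown in the trailing comment is an assumption.
rows = graph.query(
    """
    MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: "Top Gun"})
    RETURN a.name AS actor
    """
)
print(rows)  # e.g. [{'actor': 'Tom Cruise'}, ...]
```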
https://python.langchain.com/docs/integrations/graphs/
[📄️ Azure Cosmos DB for Apache Gremlin](https://python.langchain.com/docs/integrations/graphs/azure_cosmosdb_gremlin/)
https://python.langchain.com/docs/integrations/graphs/networkx/
## NetworkX > [NetworkX](https://networkx.org/) is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. This notebook goes over how to do question answering over a graph data structure. ## Setting up[​](#setting-up "Direct link to Setting up") We have to install a Python package. ``` %pip install --upgrade --quiet networkx ``` ## Create the graph[​](#create-the-graph "Direct link to Create the graph") In this section, we construct an example graph. At the moment, this works best for small pieces of text. ``` from langchain.indexes import GraphIndexCreatorfrom langchain_openai import OpenAI ``` ``` index_creator = GraphIndexCreator(llm=OpenAI(temperature=0)) ``` ``` with open("../../../modules/state_of_the_union.txt") as f: all_text = f.read() ``` We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment. ``` text = "\n".join(all_text.split("\n\n")[105:108]) ``` ``` 'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. ' ``` ``` graph = index_creator.from_text(text) ``` We can inspect the created graph. ``` [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')] ``` ## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph") We can now use the graph QA chain to ask questions of the graph. ``` from langchain.chains import GraphQAChain ``` ``` chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True) ``` ``` chain.run("what is Intel going to build?") ``` ``` > Entering new GraphQAChain chain...Entities Extracted: IntelFull Context:Intel is going to build $20 billion semiconductor "mega site"Intel is building state-of-the-art factoriesIntel is creating 10,000 new good-paying jobsIntel is helping build Silicon Valley> Finished chain. ``` ``` ' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.' ``` ## Save the graph[​](#save-the-graph "Direct link to Save the graph") We can also save and load the graph. 
``` graph.write_to_gml("graph.gml") ``` ``` from langchain.indexes.graph import NetworkxEntityGraph ``` ``` loaded_graph = NetworkxEntityGraph.from_gml("graph.gml") ``` ``` loaded_graph.get_triples() ``` ``` [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')] ``` ``` loaded_graph.get_number_of_nodes() ``` ``` loaded_graph.add_node("NewNode") ``` ``` loaded_graph.has_node("NewNode") ``` ``` loaded_graph.remove_node("NewNode") ``` ``` loaded_graph.get_neighbors("Intel") ``` ``` loaded_graph.has_edge("Intel", "Silicon Valley") ``` ``` loaded_graph.remove_edge("Intel", "Silicon Valley") ``` ``` loaded_graph.clear_edges() ```
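Since `get_triples()` returns plain Python tuples, you can also hand the extracted knowledge over to NetworkX itself for further analysis. The following is a minimal sketch, assuming each triple is ordered `(subject, object, predicate)` as in the output above:

```
import networkx as nx

# Rebuild a plain networkx DiGraph from the extracted knowledge triples so the
# full networkx toolbox (paths, centrality, drawing, ...) becomes available.
# Assumes each triple is (subject, object, predicate), matching the output above.
nx_graph = nx.DiGraph()
for subject, obj, predicate in loaded_graph.get_triples():
    nx_graph.add_edge(subject, obj, relation=predicate)

print(nx_graph.number_of_nodes(), "nodes /", nx_graph.number_of_edges(), "edges")
print(list(nx_graph.successors("Intel")))  # entities that Intel points to
```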
https://python.langchain.com/docs/integrations/graphs/ontotext/
## Ontotext GraphDB

> [Ontotext GraphDB](https://graphdb.ontotext.com/) is a graph database and knowledge discovery tool compliant with [RDF](https://www.w3.org/RDF/) and [SPARQL](https://www.w3.org/TR/sparql11-query/).

This notebook shows how to use LLMs to provide natural language querying (NLQ to SPARQL, also called `text2sparql`) for `Ontotext GraphDB`.

## GraphDB LLM Functionalities

`GraphDB` supports some LLM integration functionalities as described [here](https://github.com/w3c/sparql-dev/issues/193):

[gpt-queries](https://graphdb.ontotext.com/documentation/10.5/gpt-queries.html)

* magic predicates to ask an LLM for text, list or table using data from your knowledge graph (KG)
* query explanation
* result explanation, summarization, rephrasing, translation

[retrieval-graphdb-connector](https://graphdb.ontotext.com/documentation/10.5/retrieval-graphdb-connector.html)

* Indexing of KG entities in a vector database
* Supports any text embedding algorithm and vector database
* Uses the same powerful connector (indexing) language that GraphDB uses for Elastic, Solr, Lucene
* Automatic synchronization of changes in RDF data to the KG entity index
* Supports nested objects (no UI support in GraphDB version 10.5)
* Serializes KG entities to text like this (e.g. for a Wines dataset):

```
Franvino:
- is a RedWine.
- made from grape Merlo.
- made from grape Cabernet Franc.
- has sugar dry.
- has year 2012.
```

[talk-to-graph](https://graphdb.ontotext.com/documentation/10.5/talk-to-graph.html)

* A simple chatbot using a defined KG entity index

For this tutorial, we won’t use the GraphDB LLM integration, but `SPARQL` generation from NLQ. We’ll use the `Star Wars API` (`SWAPI`) ontology and dataset that you can examine [here](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo/blob/main/starwars-data.trig).

## Setting up

You need a running GraphDB instance. This tutorial shows how to run the database locally using the [GraphDB Docker image](https://hub.docker.com/r/ontotext/graphdb). It provides a docker compose set-up, which populates GraphDB with the Star Wars dataset. All necessary files including this notebook can be downloaded from [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo).

* Install [Docker](https://docs.docker.com/get-docker/). This tutorial is created using Docker version `24.0.7`, which bundles [Docker Compose](https://docs.docker.com/compose/). For earlier Docker versions you may need to install Docker Compose separately.
* Clone [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo) into a local folder on your machine.
* Start GraphDB with the following script, executed from the same folder:

```
docker build --tag graphdb .
docker compose up -d graphdb
```

You need to wait a couple of seconds for the database to start on `http://localhost:7200/`. The Star Wars dataset `starwars-data.trig` is automatically loaded into the `langchain` repository. The local SPARQL endpoint `http://localhost:7200/repositories/langchain` can be used to run queries against. You can also open the GraphDB Workbench from your favourite web browser at `http://localhost:7200/sparql`, where you can make queries interactively.

* Set up the working environment. If you use `conda`, create and activate a new conda env (e.g.
`conda create -n graph_ontotext_graphdb_qa python=3.9.18`). Install the following libraries:

```
pip install jupyter==1.0.0
pip install openai==1.6.1
pip install rdflib==7.0.0
pip install langchain-openai==0.0.2
pip install langchain>=0.1.5
```

Run Jupyter with `jupyter notebook`.

## Specifying the ontology

In order for the LLM to be able to generate SPARQL, it needs to know the knowledge graph schema (the ontology). It can be provided using one of two parameters on the `OntotextGraphDBGraph` class:

* `query_ontology`: a `CONSTRUCT` query that is executed on the SPARQL endpoint and returns the KG schema statements. We recommend that you store the ontology in its own named graph, which will make it easier to get only the relevant statements (as in the example below). `DESCRIBE` queries are not supported, because `DESCRIBE` returns the Symmetric Concise Bounded Description (SCBD), i.e. also the incoming class links. For large graphs with millions of instances, this is not efficient. Check [https://github.com/eclipse-rdf4j/rdf4j/issues/4857](https://github.com/eclipse-rdf4j/rdf4j/issues/4857)
* `local_file`: a local RDF ontology file. Supported RDF formats are `Turtle`, `RDF/XML`, `JSON-LD`, `N-Triples`, `Notation-3`, `Trig`, `Trix`, `N-Quads`.

In either case, the ontology dump should:

* Include enough information about classes, properties, property attachment to classes (using rdfs:domain, schema:domainIncludes or OWL restrictions), and taxonomies (important individuals).
* Not include overly verbose and irrelevant definitions and examples that do not help SPARQL construction.

```
from langchain_community.graphs import OntotextGraphDBGraph

# feeding the schema using a user construct query
graph = OntotextGraphDBGraph(
    query_endpoint="http://localhost:7200/repositories/langchain",
    query_ontology="CONSTRUCT {?s ?p ?o} FROM <https://swapi.co/ontology/> WHERE {?s ?p ?o}",
)
```

```
# feeding the schema using a local RDF file
graph = OntotextGraphDBGraph(
    query_endpoint="http://localhost:7200/repositories/langchain",
    local_file="/path/to/langchain_graphdb_tutorial/starwars-ontology.nt",  # change the path here
)
```

Either way, the ontology (schema) is fed to the LLM as `Turtle`, since `Turtle` with appropriate prefixes is most compact and easiest for the LLM to remember.

The Star Wars ontology is a bit unusual in that it includes a lot of specific triples about classes, e.g. that the species `:Aleena` live on `<planet/38>`, they are a subclass of `:Reptile`, have certain typical characteristics (average height, average lifespan, skinColor), and specific individuals (characters) are representatives of that class:

```
@prefix : <https://swapi.co/vocabulary/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:Aleena a owl:Class, :Species ;
    rdfs:label "Aleena" ;
    rdfs:isDefinedBy <https://swapi.co/ontology/> ;
    rdfs:subClassOf :Reptile, :Sentient ;
    :averageHeight 80.0 ;
    :averageLifespan "79" ;
    :character <https://swapi.co/resource/aleena/47> ;
    :film <https://swapi.co/resource/film/4> ;
    :language "Aleena" ;
    :planet <https://swapi.co/resource/planet/38> ;
    :skinColor "blue", "gray" .

...
```

In order to keep this tutorial simple, we use un-secured GraphDB. If GraphDB is secured, you should set the environment variables `GRAPHDB_USERNAME` and `GRAPHDB_PASSWORD` before the initialization of `OntotextGraphDBGraph`.
``` os.environ["GRAPHDB_USERNAME"] = "graphdb-user"os.environ["GRAPHDB_PASSWORD"] = "graphdb-password"graph = OntotextGraphDBGraph( query_endpoint=..., query_ontology=...) ``` ## Question Answering against the StarWars dataset[​](#question-answering-against-the-starwars-dataset "Direct link to Question Answering against the StarWars dataset") We can now use the `OntotextGraphDBQAChain` to ask some questions. ``` import osfrom langchain.chains import OntotextGraphDBQAChainfrom langchain_openai import ChatOpenAI# We'll be using an OpenAI model which requires an OpenAI API Key.# However, other models are available as well:# https://python.langchain.com/docs/integrations/chat/# Set the environment variable `OPENAI_API_KEY` to your OpenAI API keyos.environ["OPENAI_API_KEY"] = "sk-***"# Any available OpenAI model can be used here.# We use 'gpt-4-1106-preview' because of the bigger context window.# The 'gpt-4-1106-preview' model_name will deprecate in the future and will change to 'gpt-4-turbo' or similar,# so be sure to consult with the OpenAI API https://platform.openai.com/docs/models for the correct naming.chain = OntotextGraphDBQAChain.from_llm( ChatOpenAI(temperature=0, model_name="gpt-4-1106-preview"), graph=graph, verbose=True,) ``` Let’s ask a simple one. ``` chain.invoke({chain.input_key: "What is the climate on Tatooine?"})[chain.output_key] ``` ``` > Entering new OntotextGraphDBQAChain chain...Generated SPARQL:PREFIX : <https://swapi.co/vocabulary/>PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>SELECT ?climateWHERE { ?planet rdfs:label "Tatooine" ; :climate ?climate .}> Finished chain. ``` ``` 'The climate on Tatooine is arid.' ``` And a bit more complicated one. ``` chain.invoke({chain.input_key: "What is the climate on Luke Skywalker's home planet?"})[ chain.output_key] ``` ``` > Entering new OntotextGraphDBQAChain chain...Generated SPARQL:PREFIX : <https://swapi.co/vocabulary/>PREFIX owl: <http://www.w3.org/2002/07/owl#>PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>SELECT ?climateWHERE { ?character rdfs:label "Luke Skywalker" . ?character :homeworld ?planet . ?planet :climate ?climate .}> Finished chain. ``` ``` "The climate on Luke Skywalker's home planet is arid." ``` We can also ask more complicated questions like ``` chain.invoke( { chain.input_key: "What is the average box office revenue for all the Star Wars movies?" })[chain.output_key] ``` ``` > Entering new OntotextGraphDBQAChain chain...Generated SPARQL:PREFIX : <https://swapi.co/vocabulary/>PREFIX owl: <http://www.w3.org/2002/07/owl#>PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>SELECT (AVG(?boxOffice) AS ?averageBoxOffice)WHERE { ?film a :Film . ?film :boxOffice ?boxOfficeValue . BIND(xsd:decimal(?boxOfficeValue) AS ?boxOffice)}> Finished chain. ``` ``` 'The average box office revenue for all the Star Wars movies is approximately 754.1 million dollars.' ``` ## Chain modifiers[​](#chain-modifiers "Direct link to Chain modifiers") The Ontotext GraphDB QA chain allows prompt refinement for further improvement of your QA chain and enhancing the overall user experience of your app. ### “SPARQL Generation” prompt[​](#sparql-generation-prompt "Direct link to “SPARQL Generation” prompt") The prompt is used for the SPARQL query generation based on the user question and the KG schema. * `sparql_generation_prompt` Default value: ``` GRAPHDB_SPARQL_GENERATION_TEMPLATE = """ Write a SPARQL SELECT query for querying a graph database. 
The ontology schema delimited by triple backticks in Turtle format is: ``` {schema} ``` Use only the classes and properties provided in the schema to construct the SPARQL query. Do not use any classes or properties that are not explicitly provided in the SPARQL query. Include all necessary prefixes. Do not include any explanations or apologies in your responses. Do not wrap the query in backticks. Do not include any text except the SPARQL query generated. The question delimited by triple backticks is: ``` {prompt} ``` """ GRAPHDB_SPARQL_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "prompt"], template=GRAPHDB_SPARQL_GENERATION_TEMPLATE, ) ``` ### “SPARQL Fix” prompt[​](#sparql-fix-prompt "Direct link to “SPARQL Fix” prompt") Sometimes, the LLM may generate a SPARQL query with syntactic errors or missing prefixes, etc. The chain will try to amend this by prompting the LLM to correct it a certain number of times. * `sparql_fix_prompt` Default value: ``` GRAPHDB_SPARQL_FIX_TEMPLATE = """ This following SPARQL query delimited by triple backticks ``` {generated_sparql} ``` is not valid. The error delimited by triple backticks is ``` {error_message} ``` Give me a correct version of the SPARQL query. Do not change the logic of the query. Do not include any explanations or apologies in your responses. Do not wrap the query in backticks. Do not include any text except the SPARQL query generated. The ontology schema delimited by triple backticks in Turtle format is: ``` {schema} ``` """ GRAPHDB_SPARQL_FIX_PROMPT = PromptTemplate( input_variables=["error_message", "generated_sparql", "schema"], template=GRAPHDB_SPARQL_FIX_TEMPLATE, ) ``` * `max_fix_retries` Default value: `5` ### “Answering” prompt[​](#answering-prompt "Direct link to “Answering” prompt") The prompt is used for answering the question based on the results returned from the database and the initial user question. By default, the LLM is instructed to only use the information from the returned result(s). If the result set is empty, the LLM should inform that it can’t answer the question. * `qa_prompt` Default value: ``` GRAPHDB_QA_TEMPLATE = """Task: Generate a natural language response from the results of a SPARQL query. You are an assistant that creates well-written and human understandable answers. The information part contains the information provided, which you can use to construct an answer. The information provided is authoritative, you must never doubt it or try to use your internal knowledge to correct it. Make your response sound like the information is coming from an AI assistant, but don't add any information. Don't use internal knowledge to answer the question, just say you don't know if no information is available. Information: {context} Question: {prompt} Helpful Answer:""" GRAPHDB_QA_PROMPT = PromptTemplate( input_variables=["context", "prompt"], template=GRAPHDB_QA_TEMPLATE ) ``` Once you’re finished playing with QA with GraphDB, you can shut down the Docker environment by running `docker compose down -v --remove-orphans` from the directory with the Docker compose file.
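As a quick illustration of the chain modifiers described above, the sketch below constructs the chain with a terser `qa_prompt` and a lower `max_fix_retries`. It assumes these modifiers can be passed as keyword arguments to `from_llm`; verify the exact parameter names against your installed version.

```
from langchain.chains import OntotextGraphDBQAChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# A terser answering prompt; {context} and {prompt} are the same variables
# used by the default GRAPHDB_QA_TEMPLATE shown above.
TERSE_QA_TEMPLATE = """Answer the question using only the information below.
If no information is available, say you don't know.

Information:
{context}

Question: {prompt}
Answer in one short sentence:"""

# Assumption: the chain modifiers (qa_prompt, max_fix_retries) can be supplied
# as keyword arguments when building the chain.
chain = OntotextGraphDBQAChain.from_llm(
    ChatOpenAI(temperature=0, model_name="gpt-4-1106-preview"),
    graph=graph,
    qa_prompt=PromptTemplate(
        input_variables=["context", "prompt"], template=TERSE_QA_TEMPLATE
    ),
    max_fix_retries=3,
    verbose=True,
)
```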
https://python.langchain.com/docs/integrations/graphs/arangodb/
## ArangoDB

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/arangodb/interactive_tutorials/blob/master/notebooks/Langchain.ipynb)

> [ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to drive value from connected data, faster. Native graphs, an integrated search engine, and JSON support, via a single query language. `ArangoDB` runs on-prem or in the cloud.

This notebook shows how to use LLMs to provide a natural language interface to an [ArangoDB](https://github.com/arangodb/arangodb#readme) database.

## Setting up

You can get a local `ArangoDB` instance running via the [ArangoDB Docker image](https://hub.docker.com/_/arangodb):

```
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodb
```

An alternative is to use the [ArangoDB Cloud Connector package](https://github.com/arangodb/adb-cloud-connector#readme) to get a temporary cloud instance running:

```
%%capture
%pip install --upgrade --quiet python-arango  # The ArangoDB Python Driver
%pip install --upgrade --quiet adb-cloud-connector  # The ArangoDB Cloud Instance provisioner
%pip install --upgrade --quiet langchain-openai
%pip install --upgrade --quiet langchain
```

```
# Instantiate ArangoDB Database
import json

from adb_cloud_connector import get_temp_credentials
from arango import ArangoClient

con = get_temp_credentials()

db = ArangoClient(hosts=con["url"]).db(
    con["dbName"], con["username"], con["password"], verify=True
)

print(json.dumps(con, indent=2))
```

```
Log: requesting new credentials...
Succcess: new credentials acquired
{
  "dbName": "TUT3sp29s3pjf1io0h4cfdsq",
  "username": "TUTo6nkwgzkizej3kysgdyeo8",
  "password": "TUT9vx0qjqt42i9bq8uik4v9",
  "hostname": "tutorials.arangodb.cloud",
  "port": 8529,
  "url": "https://tutorials.arangodb.cloud:8529"
}
```

```
# Instantiate the ArangoDB-LangChain Graph
from langchain_community.graphs import ArangoGraph

graph = ArangoGraph(db)
```

## Populating database

We will rely on the `Python Driver` to import our [GameOfThrones](https://github.com/arangodb/example-datasets/tree/master/GameOfThrones) data into our database.
``` if db.has_graph("GameOfThrones"): db.delete_graph("GameOfThrones", drop_collections=True)db.create_graph( "GameOfThrones", edge_definitions=[ { "edge_collection": "ChildOf", "from_vertex_collections": ["Characters"], "to_vertex_collections": ["Characters"], }, ],)documents = [ { "_key": "NedStark", "name": "Ned", "surname": "Stark", "alive": True, "age": 41, "gender": "male", }, { "_key": "CatelynStark", "name": "Catelyn", "surname": "Stark", "alive": False, "age": 40, "gender": "female", }, { "_key": "AryaStark", "name": "Arya", "surname": "Stark", "alive": True, "age": 11, "gender": "female", }, { "_key": "BranStark", "name": "Bran", "surname": "Stark", "alive": True, "age": 10, "gender": "male", },]edges = [ {"_to": "Characters/NedStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/NedStark", "_from": "Characters/BranStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/BranStark"},]db.collection("Characters").import_bulk(documents)db.collection("ChildOf").import_bulk(edges) ``` ``` {'error': False, 'created': 4, 'errors': 0, 'empty': 0, 'updated': 0, 'ignored': 0, 'details': []} ``` ## Getting and setting the ArangoDB schema[​](#getting-and-setting-the-arangodb-schema "Direct link to Getting and setting the ArangoDB schema") An initial `ArangoDB Schema` is generated upon instantiating the `ArangoDBGraph` object. Below are the schema’s getter & setter methods should you be interested in viewing or modifying the schema: ``` # The schema should be empty here,# since `graph` was initialized prior to ArangoDB Data ingestion (see above).import jsonprint(json.dumps(graph.schema, indent=4)) ``` ``` { "Graph Schema": [], "Collection Schema": []} ``` ``` # We can now view the generated schemaimport jsonprint(json.dumps(graph.schema, indent=4)) ``` ``` { "Graph Schema": [ { "graph_name": "GameOfThrones", "edge_definitions": [ { "edge_collection": "ChildOf", "from_vertex_collections": [ "Characters" ], "to_vertex_collections": [ "Characters" ] } ] } ], "Collection Schema": [ { "collection_name": "ChildOf", "collection_type": "edge", "edge_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_from", "type": "str" }, { "name": "_to", "type": "str" }, { "name": "_rev", "type": "str" } ], "example_edge": { "_key": "266218884025", "_id": "ChildOf/266218884025", "_from": "Characters/AryaStark", "_to": "Characters/NedStark", "_rev": "_gVPKGSq---" } }, { "collection_name": "Characters", "collection_type": "document", "document_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_rev", "type": "str" }, { "name": "name", "type": "str" }, { "name": "surname", "type": "str" }, { "name": "alive", "type": "bool" }, { "name": "age", "type": "int" }, { "name": "gender", "type": "str" } ], "example_document": { "_key": "NedStark", "_id": "Characters/NedStark", "_rev": "_gVPKGPi---", "name": "Ned", "surname": "Stark", "alive": true, "age": 41, "gender": "male" } } ]} ``` ## Querying the ArangoDB database[​](#querying-the-arangodb-database "Direct link to Querying the ArangoDB database") We can now use the `ArangoDB Graph` QA Chain to inquire about our data ``` import osos.environ["OPENAI_API_KEY"] = "your-key-here" ``` ``` from langchain.chains import ArangoGraphQAChainfrom langchain_openai import ChatOpenAIchain = ArangoGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True) ``` ``` chain.run("Is Ned Stark alive?") 
``` ``` > Entering new ArangoGraphQAChain chain...AQL Query (1):WITH CharactersFOR character IN CharactersFILTER character.name == "Ned" AND character.surname == "Stark"RETURN character.aliveAQL Result:[True]> Finished chain. ``` ``` 'Yes, Ned Stark is alive.' ``` ``` chain.run("How old is Arya Stark?") ``` ``` > Entering new ArangoGraphQAChain chain...AQL Query (1):WITH CharactersFOR character IN CharactersFILTER character.name == "Arya" && character.surname == "Stark"RETURN character.ageAQL Result:[11]> Finished chain. ``` ``` 'Arya Stark is 11 years old.' ``` ``` chain.run("Are Arya Stark and Ned Stark related?") ``` ``` > Entering new ArangoGraphQAChain chain...AQL Query (1):WITH Characters, ChildOfFOR v, e, p IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER p.vertices[-1]._key == 'NedStark' RETURN pAQL Result:[{'vertices': [{'_key': 'AryaStark', '_id': 'Characters/AryaStark', '_rev': '_gVPKGPi--B', 'name': 'Arya', 'surname': 'Stark', 'alive': True, 'age': 11, 'gender': 'female'}, {'_key': 'NedStark', '_id': 'Characters/NedStark', '_rev': '_gVPKGPi---', 'name': 'Ned', 'surname': 'Stark', 'alive': True, 'age': 41, 'gender': 'male'}], 'edges': [{'_key': '266218884025', '_id': 'ChildOf/266218884025', '_from': 'Characters/AryaStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq---'}], 'weights': [0, 1]}]> Finished chain. ``` ``` 'Yes, Arya Stark and Ned Stark are related. According to the information retrieved from the database, there is a relationship between them. Arya Stark is the child of Ned Stark.' ``` ``` chain.run("Does Arya Stark have a dead parent?") ``` ``` > Entering new ArangoGraphQAChain chain...AQL Query (1):WITH Characters, ChildOfFOR v, e IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOfFILTER v.alive == falseRETURN eAQL Result:[{'_key': '266218884027', '_id': 'ChildOf/266218884027', '_from': 'Characters/AryaStark', '_to': 'Characters/CatelynStark', '_rev': '_gVPKGSu---'}]> Finished chain. ``` ``` 'Yes, Arya Stark has a dead parent. The parent is Catelyn Stark.' ``` ## Chain modifiers[​](#chain-modifiers "Direct link to Chain modifiers") You can alter the values of the following `ArangoDBGraphQAChain` class variables to modify the behaviour of your chain results ``` # Specify the maximum number of AQL Query Results to returnchain.top_k = 10# Specify whether or not to return the AQL Query in the output dictionarychain.return_aql_query = True# Specify whether or not to return the AQL JSON Result in the output dictionarychain.return_aql_result = True# Specify the maximum amount of AQL Generation attempts that should be madechain.max_aql_generation_attempts = 5# Specify a set of AQL Query Examples, which are passed to# the AQL Generation Prompt Template to promote few-shot-learning.# Defaults to an empty string.chain.aql_examples = """# Is Ned Stark alive?RETURN DOCUMENT('Characters/NedStark').alive# Is Arya Stark the child of Ned Stark?FOR e IN ChildOf FILTER e._from == "Characters/AryaStark" AND e._to == "Characters/NedStark" RETURN e""" ``` ``` chain.run("Is Ned Stark alive?")# chain("Is Ned Stark alive?") # Returns a dictionary with the AQL Query & AQL Result ``` ``` > Entering new ArangoGraphQAChain chain...AQL Query (1):RETURN DOCUMENT('Characters/NedStark').aliveAQL Result:[True]> Finished chain. ``` ``` 'Yes, according to the information in the database, Ned Stark is alive.' 
``` ``` chain.run("Is Bran Stark the child of Ned Stark?") ``` ``` > Entering new ArangoGraphQAChain chain...AQL Query (1):FOR e IN ChildOf FILTER e._from == "Characters/BranStark" AND e._to == "Characters/NedStark" RETURN eAQL Result:[{'_key': '266218884026', '_id': 'ChildOf/266218884026', '_from': 'Characters/BranStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq--_'}]> Finished chain. ``` ``` 'Yes, according to the information in the ArangoDB database, Bran Stark is indeed the child of Ned Stark.' ```
https://python.langchain.com/docs/integrations/graphs/azure_cosmosdb_gremlin/
## Azure Cosmos DB for Apache Gremlin

> [Azure Cosmos DB for Apache Gremlin](https://learn.microsoft.com/en-us/azure/cosmos-db/gremlin/introduction) is a graph database service that can be used to store massive graphs with billions of vertices and edges. You can query the graphs with millisecond latency and evolve the graph structure easily.
>
> [Gremlin](https://en.wikipedia.org/wiki/Gremlin_(query_language)) is a graph traversal language and virtual machine developed by `Apache TinkerPop` of the `Apache Software Foundation`.

This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the `Gremlin` query language.

## Setting up

Install a library:

```
!pip3 install gremlinpython
```

You will need an Azure CosmosDB Graph database instance. One option is to create a [free CosmosDB Graph database instance in Azure](https://learn.microsoft.com/en-us/azure/cosmos-db/free-tier).

When you create your Cosmos DB account and Graph, use `/type` as a partition key.

```
cosmosdb_name = "mycosmosdb"
cosmosdb_db_id = "graphtesting"
cosmosdb_db_graph_id = "mygraph"
cosmosdb_access_Key = "longstring=="
```

```
import nest_asyncio
from langchain.chains.graph_qa.gremlin import GremlinQAChain
from langchain.schema import Document
from langchain_community.graphs import GremlinGraph
from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship
from langchain_openai import AzureChatOpenAI
```

```
graph = GremlinGraph(
    url=f"wss://{cosmosdb_name}.gremlin.cosmos.azure.com:443/",
    username=f"/dbs/{cosmosdb_db_id}/colls/{cosmosdb_db_graph_id}",
    password=cosmosdb_access_Key,
)
```

## Seeding the database

Assuming your database is empty, you can populate it using GraphDocuments.

For Gremlin, always add a property called 'label' for each Node. If no label is set, Node.type is used as a label. For Cosmos, using natural IDs makes sense, as they are visible in the graph explorer.
```
source_doc = Document(
    page_content="Matrix is a movie where Keanu Reeves, Laurence Fishburne and Carrie-Anne Moss acted."
)
movie = Node(id="The Matrix", properties={"label": "movie", "title": "The Matrix"})
actor1 = Node(id="Keanu Reeves", properties={"label": "actor", "name": "Keanu Reeves"})
actor2 = Node(
    id="Laurence Fishburne", properties={"label": "actor", "name": "Laurence Fishburne"}
)
actor3 = Node(
    id="Carrie-Anne Moss", properties={"label": "actor", "name": "Carrie-Anne Moss"}
)
rel1 = Relationship(
    id=5, type="ActedIn", source=actor1, target=movie, properties={"label": "ActedIn"}
)
rel2 = Relationship(
    id=6, type="ActedIn", source=actor2, target=movie, properties={"label": "ActedIn"}
)
rel3 = Relationship(
    id=7, type="ActedIn", source=actor3, target=movie, properties={"label": "ActedIn"}
)
rel4 = Relationship(
    id=8, type="Starring", source=movie, target=actor1, properties={"label": "Starring"}
)
rel5 = Relationship(
    id=9, type="Starring", source=movie, target=actor2, properties={"label": "Starring"}
)
rel6 = Relationship(
    id=10, type="Starring", source=movie, target=actor3, properties={"label": "Starring"}
)
graph_doc = GraphDocument(
    nodes=[movie, actor1, actor2, actor3],
    relationships=[rel1, rel2, rel3, rel4, rel5, rel6],
    source=source_doc,
)
```

```
# The underlying python-gremlin has a problem when running in a notebook.
# The following line is a workaround to fix the problem.
nest_asyncio.apply()

# Add the document to the CosmosDB graph.
graph.add_graph_documents([graph_doc])
```

## Refresh graph schema information

If the schema of the database changes (after updates), you can refresh the schema information.

## Querying the graph

We can now use the Gremlin QA chain to ask questions of the graph.

```
chain = GremlinQAChain.from_llm(
    AzureChatOpenAI(
        temperature=0,
        azure_deployment="gpt-4-turbo",
    ),
    graph=graph,
    verbose=True,
)
```

```
chain.invoke("Who played in The Matrix?")
```

```
chain.run("How many people played in The Matrix?")
```
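The schema-refresh step mentioned above has no accompanying code; a minimal sketch follows, assuming `GremlinGraph` exposes `refresh_schema()` and a `schema` property like the other LangChain graph wrappers.

```
# Assumption: GremlinGraph follows the same convention as other LangChain graph
# wrappers and exposes refresh_schema() plus a schema property.
graph.refresh_schema()
print(graph.schema)
```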
Azure Cosmos DB for Apache Gremlin is a graph database service that can be used to store massive graphs with billions of vertices and edges. You can query the graphs with millisecond latency and evolve the graph structure easily. Gremlin is a graph traversal language and virtual machine developed by Apache TinkerPop of the Apache Software Foundation. This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Gremlin query language. Setting up​ Install a library: !pip3 install gremlinpython You will need an Azure CosmosDB Graph database instance. One option is to create a free CosmosDB Graph database instance in Azure. When you create your Cosmos DB account and Graph, use /type as a partition key. cosmosdb_name = "mycosmosdb" cosmosdb_db_id = "graphtesting" cosmosdb_db_graph_id = "mygraph" cosmosdb_access_Key = "longstring==" import nest_asyncio from langchain.chains.graph_qa.gremlin import GremlinQAChain from langchain.schema import Document from langchain_community.graphs import GremlinGraph from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship from langchain_openai import AzureChatOpenAI graph = GremlinGraph( url=f"=wss://{cosmosdb_name}.gremlin.cosmos.azure.com:443/", username=f"/dbs/{cosmosdb_db_id}/colls/{cosmosdb_db_graph_id}", password=cosmosdb_access_Key, ) Seeding the database​ Assuming your database is empty, you can populate it using the GraphDocuments For Gremlin, always add property called ‘label’ for each Node. If no label is set, Node.type is used as a label. For cosmos using natural id’s make sense, as they are visible in the graph explorer. source_doc = Document( page_content="Matrix is a movie where Keanu Reeves, Laurence Fishburne and Carrie-Anne Moss acted." ) movie = Node(id="The Matrix", properties={"label": "movie", "title": "The Matrix"}) actor1 = Node(id="Keanu Reeves", properties={"label": "actor", "name": "Keanu Reeves"}) actor2 = Node( id="Laurence Fishburne", properties={"label": "actor", "name": "Laurence Fishburne"} ) actor3 = Node( id="Carrie-Anne Moss", properties={"label": "actor", "name": "Carrie-Anne Moss"} ) rel1 = Relationship( id=5, type="ActedIn", source=actor1, target=movie, properties={"label": "ActedIn"} ) rel2 = Relationship( id=6, type="ActedIn", source=actor2, target=movie, properties={"label": "ActedIn"} ) rel3 = Relationship( id=7, type="ActedIn", source=actor3, target=movie, properties={"label": "ActedIn"} ) rel4 = Relationship( id=8, type="Starring", source=movie, target=actor1, properties={"label": "Strarring"}, ) rel5 = Relationship( id=9, type="Starring", source=movie, target=actor2, properties={"label": "Strarring"}, ) rel6 = Relationship( id=10, type="Straring", source=movie, target=actor3, properties={"label": "Strarring"}, ) graph_doc = GraphDocument( nodes=[movie, actor1, actor2, actor3], relationships=[rel1, rel2, rel3, rel4, rel5, rel6], source=source_doc, ) # The underlying python-gremlin has a problem when running in notebook # The following line is a workaround to fix the problem nest_asyncio.apply() # Add the document to the CosmosDB graph. graph.add_graph_documents([graph_doc]) Refresh graph schema information​ If the schema of database changes (after updates), you can refresh the schema information. 
Querying the graph​ We can now use the gremlin QA chain to ask question of the graph chain = GremlinQAChain.from_llm( AzureChatOpenAI( temperature=0, azure_deployment="gpt-4-turbo", ), graph=graph, verbose=True, ) chain.invoke("Who played in The Matrix?") chain.run("How many people played in The Matrix?")
https://python.langchain.com/docs/integrations/graphs/diffbot/
## Diffbot

[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/graph/diffbot_graphtransformer.ipynb) Open In Colab

> [Diffbot](https://docs.diffbot.com/docs/getting-started-with-diffbot) is a suite of products that make it easy to integrate and research data on the web.
>
> [The Diffbot Knowledge Graph](https://docs.diffbot.com/docs/getting-started-with-diffbot-knowledge-graph) is a self-updating graph database of the public web.

## Use case[​](#use-case "Direct link to Use case")

Text data often contains rich relationships and insights used for various analytics, recommendation engines, or knowledge management applications.

`Diffbot's NLP API` allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.

By coupling `Diffbot's NLP API` with `Neo4j`, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. These graph structures are fully queryable and can be integrated into various applications.

This combination allows for use cases such as:

* Building knowledge graphs from textual documents, websites, or social media feeds.
* Generating recommendations based on semantic relationships in the data.
* Creating advanced search features that understand the relationships between entities.
* Building analytics dashboards that allow users to explore the hidden relationships in data.

## Overview[​](#overview "Direct link to Overview")

LangChain provides tools to interact with Graph Databases:

1. `Construct knowledge graphs from text` using graph transformer and store integrations
2. `Query a graph database` using chains for query creation and execution
3. `Interact with a graph database` using agents for robust and flexible querying

## Setting up[​](#setting-up "Direct link to Setting up")

First, get required packages and set environment variables:

```
%pip install --upgrade --quiet langchain langchain-experimental langchain-openai neo4j wikipedia
```

### Diffbot NLP Service[​](#diffbot-nlp-service "Direct link to Diffbot NLP Service")

`Diffbot's NLP` service is a tool for extracting entities, relationships, and semantic context from unstructured text data. This extracted information can be used to construct a knowledge graph. To use their service, you’ll need to obtain an API key from [Diffbot](https://www.diffbot.com/products/natural-language/).

```
from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer

diffbot_api_key = "DIFFBOT_API_KEY"
diffbot_nlp = DiffbotGraphTransformer(diffbot_api_key=diffbot_api_key)
```

This code fetches Wikipedia articles about “Warren Buffett” and then uses `DiffbotGraphTransformer` to extract entities and relationships. The `DiffbotGraphTransformer` outputs structured data as a `GraphDocument`, which can be used to populate a graph database. Note that text chunking is avoided due to Diffbot’s [character limit per API request](https://docs.diffbot.com/reference/introduction-to-natural-language-api).

```
from langchain_community.document_loaders import WikipediaLoader

query = "Warren Buffett"
raw_documents = WikipediaLoader(query=query).load()
graph_documents = diffbot_nlp.convert_to_graph_documents(raw_documents)
```

## Loading the data into a knowledge graph[​](#loading-the-data-into-a-knowledge-graph "Direct link to Loading the data into a knowledge graph")

You will need to have a running Neo4j instance.
One option is to create a [free Neo4j database instance in their Aura cloud service](https://neo4j.com/cloud/platform/aura-graph-database/). You can also run the database locally using the [Neo4j Desktop application](https://neo4j.com/download/), or by running a docker container. You can start a local docker container by executing the following script:

```
docker run \
    --name neo4j \
    -p 7474:7474 -p 7687:7687 \
    -d \
    -e NEO4J_AUTH=neo4j/pleaseletmein \
    -e NEO4J_PLUGINS=\[\"apoc\"\] \
    neo4j:latest
```

If you are using the docker container, you need to wait a couple of seconds for the database to start.

```
from langchain_community.graphs import Neo4jGraph

url = "bolt://localhost:7687"
username = "neo4j"
password = "pleaseletmein"
graph = Neo4jGraph(url=url, username=username, password=password)
```

The `GraphDocuments` can be loaded into a knowledge graph using the `add_graph_documents` method.

```
graph.add_graph_documents(graph_documents)
```

## Refresh graph schema information[​](#refresh-graph-schema-information "Direct link to Refresh graph schema information")

If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.

## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph")

We can now use the graph Cypher QA chain to ask questions of the graph. It is advisable to use **gpt-4** to construct Cypher queries to get the best experience.

```
from langchain.chains import GraphCypherQAChain
from langchain_openai import ChatOpenAI

chain = GraphCypherQAChain.from_llm(
    cypher_llm=ChatOpenAI(temperature=0, model_name="gpt-4"),
    qa_llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
)
```

```
chain.run("Which university did Warren Buffett attend?")
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (p:Person {name: "Warren Buffett"})-[:EDUCATED_AT]->(o:Organization)
RETURN o.name
Full Context:
[{'o.name': 'New York Institute of Finance'}, {'o.name': 'Alice Deal Junior High School'}, {'o.name': 'Woodrow Wilson High School'}, {'o.name': 'University of Nebraska'}]

> Finished chain.
```

```
'Warren Buffett attended the University of Nebraska.'
```

```
chain.run("Who is or was working at Berkshire Hathaway?")
```

```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (p:Person)-[r:EMPLOYEE_OR_MEMBER_OF]->(o:Organization) WHERE o.name = 'Berkshire Hathaway' RETURN p.name
Full Context:
[{'p.name': 'Charlie Munger'}, {'p.name': 'Oliver Chace'}, {'p.name': 'Howard Buffett'}, {'p.name': 'Howard'}, {'p.name': 'Susan Buffett'}, {'p.name': 'Warren Buffett'}]

> Finished chain.
```

```
'Charlie Munger, Oliver Chace, Howard Buffett, Susan Buffett, and Warren Buffett are or were working at Berkshire Hathaway.'
```
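The schema refresh mentioned in the "Refresh graph schema information" section above corresponds to a single call on the `Neo4jGraph` object. A minimal sketch, assuming the `refresh_schema()` method and `schema` attribute shown in other Neo4j examples, looks like this:

```
# Re-read node labels, relationship types and properties from Neo4j
# so that generated Cypher matches the current graph structure.
graph.refresh_schema()

# The schema string that will be injected into the Cypher-generation prompt.
print(graph.schema)
```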
https://python.langchain.com/docs/integrations/graphs/rdflib_sparql/
Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends `Semantic Web Technologies`, cp. [Semantic Web](https://www.w3.org/standards/semanticweb/). [SPARQL](https://www.w3.org/TR/sparql11-query/) serves as a query language analogously to `SQL` or `Cypher` for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating `SPARQL`. **Disclaimer:** To date, `SPARQL` query generation via LLMs is still a bit unstable. Be especially careful with `UPDATE` queries, which alter the graph. There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page), and [triple stores](https://www.w3.org/wiki/LargeTripleStores). Note that providing a `local_file` is necessary for storing changes locally if the source is read-only. If the schema of the database changes, you can refresh the schema information needed to generate SPARQL queries. ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The RDF graph supports the following node types:<http://xmlns.com/foaf/0.1/PersonalProfileDocument> (PersonalProfileDocument, None), <http://www.w3.org/ns/auth/cert#RSAPublicKey> (RSAPublicKey, None), <http://www.w3.org/2000/10/swap/pim/contact#Male> (Male, None), <http://xmlns.com/foaf/0.1/Person> (Person, None), <http://www.w3.org/2006/vcard/ns#Work> (Work, None)The RDF graph supports the following relationships:<http://www.w3.org/2000/01/rdf-schema#seeAlso> (seeAlso, None), <http://purl.org/dc/elements/1.1/title> (title, None), <http://xmlns.com/foaf/0.1/mbox_sha1sum> (mbox_sha1sum, None), <http://xmlns.com/foaf/0.1/maker> (maker, None), <http://www.w3.org/ns/solid/terms#oidcIssuer> (oidcIssuer, None), <http://www.w3.org/2000/10/swap/pim/contact#publicHomePage> (publicHomePage, None), <http://xmlns.com/foaf/0.1/openid> (openid, None), <http://www.w3.org/ns/pim/space#storage> (storage, None), <http://xmlns.com/foaf/0.1/name> (name, None), <http://www.w3.org/2000/10/swap/pim/contact#country> (country, None), <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> (type, None), <http://www.w3.org/ns/solid/terms#profileHighlightColor> (profileHighlightColor, None), <http://www.w3.org/ns/pim/space#preferencesFile> (preferencesFile, None), <http://www.w3.org/2000/01/rdf-schema#label> (label, None), <http://www.w3.org/ns/auth/cert#modulus> (modulus, None), <http://www.w3.org/2000/10/swap/pim/contact#participant> (participant, None), <http://www.w3.org/2000/10/swap/pim/contact#street2> (street2, None), <http://www.w3.org/2006/vcard/ns#locality> (locality, None), <http://xmlns.com/foaf/0.1/nick> (nick, None), <http://xmlns.com/foaf/0.1/homepage> (homepage, None), <http://creativecommons.org/ns#license> (license, None), <http://xmlns.com/foaf/0.1/givenname> (givenname, None), <http://www.w3.org/2006/vcard/ns#street-address> (street-address, None), <http://www.w3.org/2006/vcard/ns#postal-code> (postal-code, None), <http://www.w3.org/2000/10/swap/pim/contact#street> (street, None), <http://www.w3.org/2003/01/geo/wgs84_pos#lat> (lat, None), <http://xmlns.com/foaf/0.1/primaryTopic> (primaryTopic, None), <http://www.w3.org/2006/vcard/ns#fn> (fn, None), <http://www.w3.org/2003/01/geo/wgs84_pos#location> (location, None), <http://usefulinc.com/ns/doap#developer> (developer, None), 
<http://www.w3.org/2000/10/swap/pim/contact#city> (city, None), <http://www.w3.org/2006/vcard/ns#region> (region, None), <http://xmlns.com/foaf/0.1/member> (member, None), <http://www.w3.org/2003/01/geo/wgs84_pos#long> (long, None), <http://www.w3.org/2000/10/swap/pim/contact#address> (address, None), <http://xmlns.com/foaf/0.1/family_name> (family_name, None), <http://xmlns.com/foaf/0.1/account> (account, None), <http://xmlns.com/foaf/0.1/workplaceHomepage> (workplaceHomepage, None), <http://purl.org/dc/terms/title> (title, None), <http://www.w3.org/ns/solid/terms#publicTypeIndex> (publicTypeIndex, None), <http://www.w3.org/2000/10/swap/pim/contact#office> (office, None), <http://www.w3.org/2000/10/swap/pim/contact#homePage> (homePage, None), <http://xmlns.com/foaf/0.1/mbox> (mbox, None), <http://www.w3.org/2000/10/swap/pim/contact#preferredURI> (preferredURI, None), <http://www.w3.org/ns/solid/terms#profileBackgroundColor> (profileBackgroundColor, None), <http://schema.org/owns> (owns, None), <http://xmlns.com/foaf/0.1/based_near> (based_near, None), <http://www.w3.org/2006/vcard/ns#hasAddress> (hasAddress, None), <http://xmlns.com/foaf/0.1/img> (img, None), <http://www.w3.org/2000/10/swap/pim/contact#assistant> (assistant, None), <http://xmlns.com/foaf/0.1/title> (title, None), <http://www.w3.org/ns/auth/cert#key> (key, None), <http://www.w3.org/ns/ldp#inbox> (inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile> (editableProfile, None), <http://www.w3.org/2000/10/swap/pim/contact#postalCode> (postalCode, None), <http://xmlns.com/foaf/0.1/weblog> (weblog, None), <http://www.w3.org/ns/auth/cert#exponent> (exponent, None), <http://rdfs.org/sioc/ns#avatar> (avatar, None) ``` Now, you can use the graph SPARQL QA chain to ask questions about the graph. ``` > Entering new GraphSparqlQAChain chain...Identified intent:SELECTGenerated SPARQL:PREFIX foaf: <http://xmlns.com/foaf/0.1/>SELECT ?homepageWHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?homepage .}Full Context:[]> Finished chain. ``` ``` "Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/." ``` Analogously, you can update the graph, i.e., insert triples, using natural language. ``` > Entering new GraphSparqlQAChain chain...Identified intent:UPDATEGenerated SPARQL:PREFIX foaf: <http://xmlns.com/foaf/0.1/>INSERT { ?person foaf:workplaceHomepage <http://www.w3.org/foo/bar/> .}WHERE { ?person foaf:name "Timothy Berners-Lee" .}> Finished chain. ``` ``` 'Successfully inserted triples into the graph.' ``` ``` [(rdflib.term.URIRef('https://www.w3.org/'),), (rdflib.term.URIRef('http://www.w3.org/foo/bar/'),)] ``` You can return the SPARQL query step from the Sparql QA Chain using the `return_sparql_query` parameter ``` > Entering new GraphSparqlQAChain chain...Identified intent:SELECTGenerated SPARQL:PREFIX foaf: <http://xmlns.com/foaf/0.1/>SELECT ?workHomepageWHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?workHomepage .}Full Context:[]> Finished chain.SQARQL query: PREFIX foaf: <http://xmlns.com/foaf/0.1/>SELECT ?workHomepageWHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?workHomepage .}Final answer: Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/. ``` ``` PREFIX foaf: <http://xmlns.com/foaf/0.1/>SELECT ?workHomepageWHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?workHomepage .} ```
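The page above shows the schema output and chain runs but not the setup that produced them. A minimal sketch of how such a graph and chain are typically constructed with `RdfGraph` and `GraphSparqlQAChain` is below; the source file URL, `local_copy` path, and model choice are illustrative assumptions rather than part of the original page text.

```
from langchain.chains import GraphSparqlQAChain
from langchain_community.graphs import RdfGraph
from langchain_openai import ChatOpenAI

# Load a public FOAF profile as the RDF source; local_copy is where
# INSERT/UPDATE results are written, since the remote file is read-only.
graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",
    standard="rdf",
    local_copy="test.ttl",
)

chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("What is Tim Berners-Lee's work homepage?")
```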
https://python.langchain.com/docs/integrations/llms/
## LLMs ## Features (natively supported)[​](#features-natively-supported "Direct link to Features (natively supported)") All LLMs implement the Runnable interface, which comes with default implementations of all methods, ie. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all LLMs basic support for async, streaming and batch, which by default is implemented as below: * _Async_ support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread. * _Streaming_ support defaults to returning an `Iterator` (or `AsyncIterator` in the case of async streaming) of a single value, the final result returned by the underlying LLM provider. This obviously doesn't give you token-by-token streaming, which requires native support from the LLM provider, but ensures your code that expects an iterator of tokens can work for any of our LLM integrations. * _Batch_ support defaults to calling the underlying LLM in parallel for each input by making use of a thread pool executor (in the sync batch case) or `asyncio.gather` (in the async batch case). The concurrency can be controlled with the `max_concurrency` key in `RunnableConfig`. Each LLM integration can optionally provide native implementations for async, streaming or batch, which, for providers that support it, can be more efficient. The table shows, for each integration, which features have been implemented with native support. | Model | Invoke | Async invoke | Stream | Async stream | Batch | Async batch | Tool calling | | --- | --- | --- | --- | --- | --- | --- | --- | | AI21 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | AlephAlpha | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | AmazonAPIGateway | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Anthropic | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | Anyscale | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | Aphrodite | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | | Arcee | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Aviary | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | AzureMLOnlineEndpoint | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | | AzureOpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | BaichuanLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Banana | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Baseten | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Beam | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Bedrock | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | CTransformers | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | CTranslate2 | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | | CerebriumAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | ChatGLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Clarifai | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Cohere | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | Databricks | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | DeepInfra | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | DeepSparse | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | EdenAI | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | Fireworks | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | ForefrontAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Friendli | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | GPT4All | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | GigaChat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | GooglePalm | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | | GooseAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | GradientLLM | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | | HuggingFaceEndpoint | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | HuggingFaceHub | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | HuggingFacePipeline | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | | HuggingFaceTextGenInference | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | HumanInputLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | IpexLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | JavelinAIGateway | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | KoboldApiLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Konko | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | LlamaCpp | ✅ | ❌ | ✅ | 
❌ | ❌ | ❌ | ❌ | | Llamafile | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | MLXPipeline | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | ManifestWrapper | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Minimax | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Mlflow | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | MlflowAIGateway | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Modal | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | MosaicML | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | NIBittensorLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | NLPCloud | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Nebula | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | OCIGenAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | OCIModelDeploymentTGI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | OCIModelDeploymentVLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | OctoAIEndpoint | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | Ollama | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | OpaquePrompts | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | OpenLLM | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | OpenLM | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | PaiEasEndpoint | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | Petals | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | PipelineAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Predibase | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | PredictionGuard | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | PromptLayerOpenAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | QianfanLLMEndpoint | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | | RWKV | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Replicate | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | SagemakerEndpoint | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | SelfHostedHuggingFaceLLM | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | SelfHostedPipeline | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | SparkLLM | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | StochasticAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | TextGen | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | TitanTakeoff | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | TitanTakeoffPro | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | Together | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | Tongyi | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | VLLM | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | | VLLMOpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | | VertexAI | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | | VertexAIModelGarden | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | | VolcEngineMaasLLM | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | | WatsonxLLM | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | | WeightOnlyQuantPipeline | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Writer | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | Xinference | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | YandexGPT | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | | Yuan2 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
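As a concrete illustration of the default `Runnable` behaviour described above, the sketch below calls `invoke`, `stream`, and `batch` on a single integration. The `OpenAI` model and the prompts are arbitrary placeholders; any LLM from the table could be substituted.

```
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)

# Single synchronous call.
print(llm.invoke("Name one graph database."))

# Streaming: for providers without native streaming this yields a single chunk
# (the final string); for providers with native support it yields token chunks.
for chunk in llm.stream("Name one graph database."):
    print(chunk, end="", flush=True)

# Batch: inputs are processed in parallel, bounded by max_concurrency.
print(
    llm.batch(
        ["Name a graph database.", "Name a vector store."],
        config={"max_concurrency": 2},
    )
)
```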
https://python.langchain.com/docs/integrations/graphs/falkordb/
This notebook shows how to use LLMs to provide a natural language interface to a `FalkorDB` database. Once launched, you create a database on the local machine and connect to it.

```
from langchain.chains import FalkorDBQAChain
from langchain_community.graphs import FalkorDBGraph
from langchain_openai import ChatOpenAI
```

```
graph.query(
    """
    CREATE
        (al:Person {name: 'Al Pacino', birthDate: '1940-04-25'}),
        (robert:Person {name: 'Robert De Niro', birthDate: '1943-08-17'}),
        (tom:Person {name: 'Tom Cruise', birthDate: '1962-07-3'}),
        (val:Person {name: 'Val Kilmer', birthDate: '1959-12-31'}),
        (anthony:Person {name: 'Anthony Edwards', birthDate: '1962-7-19'}),
        (meg:Person {name: 'Meg Ryan', birthDate: '1961-11-19'}),
        (god1:Movie {title: 'The Godfather'}),
        (god2:Movie {title: 'The Godfather: Part II'}),
        (god3:Movie {title: 'The Godfather Coda: The Death of Michael Corleone'}),
        (top:Movie {title: 'Top Gun'}),
        (al)-[:ACTED_IN]->(god1),
        (al)-[:ACTED_IN]->(god2),
        (al)-[:ACTED_IN]->(god3),
        (robert)-[:ACTED_IN]->(god2),
        (tom)-[:ACTED_IN]->(top),
        (val)-[:ACTED_IN]->(top),
        (anthony)-[:ACTED_IN]->(top),
        (meg)-[:ACTED_IN]->(top)
    """
)
```

```
Node properties: [[OrderedDict([('label', None), ('properties', ['name', 'birthDate', 'title'])])]]
Relationships properties: [[OrderedDict([('type', None), ('properties', [])])]]
Relationships: [['(:Person)-[:ACTED_IN]->(:Movie)']]
```

```
chain = FalkorDBQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
```

```
> Entering new FalkorDBQAChain chain...
Generated Cypher:
MATCH (p:Person)-[:ACTED_IN]->(m:Movie)
WHERE m.title = 'Top Gun'
RETURN p.name
Full Context:
[['Tom Cruise'], ['Val Kilmer'], ['Anthony Edwards'], ['Meg Ryan'], ['Tom Cruise'], ['Val Kilmer'], ['Anthony Edwards'], ['Meg Ryan']]

> Finished chain.
```

```
'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'
```

```
> Entering new FalkorDBQAChain chain...
Generated Cypher:
MATCH (p:Person)-[r:ACTED_IN]->(m:Movie)
WHERE m.title = 'The Godfather: Part II'
RETURN p.name
ORDER BY p.birthDate ASC
LIMIT 1
Full Context:
[['Al Pacino']]

> Finished chain.
```

```
'The oldest actor who played in The Godfather: Part II is Al Pacino.'
```

```
> Entering new FalkorDBQAChain chain...
Generated Cypher:
MATCH (p:Person {name: 'Robert De Niro'})-[:ACTED_IN]->(m:Movie)
RETURN m.title
Full Context:
[['The Godfather: Part II'], ['The Godfather: Part II']]

> Finished chain.
```

```
'Robert De Niro played in "The Godfather: Part II".'
```
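The notebook above assumes a FalkorDB instance is already running and that `graph` is an existing `FalkorDBGraph` connection. A minimal sketch of that setup, in which the docker image tag and the database name "movies" are assumptions, might look like:

```
# Start FalkorDB locally first, for example:
#   docker run -p 6379:6379 -it --rm falkordb/falkordb:edge

from langchain_community.graphs import FalkorDBGraph

# Connect to (and implicitly create) a graph named "movies" on localhost:6379.
graph = FalkorDBGraph(database="movies")
```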
https://python.langchain.com/docs/integrations/graphs/hugegraph/
## HugeGraph

> [HugeGraph](https://hugegraph.apache.org/) is a convenient, efficient, and adaptable graph database compatible with the `Apache TinkerPop3` framework and the `Gremlin` query language.
>
> [Gremlin](https://en.wikipedia.org/wiki/Gremlin_(query_language)) is a graph traversal language and virtual machine developed by `Apache TinkerPop` of the `Apache Software Foundation`.

This notebook shows how to use LLMs to provide a natural language interface to a [HugeGraph](https://hugegraph.apache.org/cn/) database.

## Setting up[​](#setting-up "Direct link to Setting up")

You will need to have a running HugeGraph instance. You can start a local docker container by executing the following script:

```
docker run \
    --name=graph \
    -itd \
    -p 8080:8080 \
    hugegraph/hugegraph
```

If we want to connect to HugeGraph from the application, we need to install the Python SDK:

```
pip3 install hugegraph-python
```

If you are using the docker container, you need to wait a couple of seconds for the database to start; then we need to create a schema and write graph data to the database.

```
from hugegraph.connection import PyHugeGraph

client = PyHugeGraph("localhost", "8080", user="admin", pwd="admin", graph="hugegraph")
```

First, we create the schema for a simple movie database:

```
"""schema"""
schema = client.schema()
schema.propertyKey("name").asText().ifNotExist().create()
schema.propertyKey("birthDate").asText().ifNotExist().create()
schema.vertexLabel("Person").properties(
    "name", "birthDate"
).usePrimaryKeyId().primaryKeys("name").ifNotExist().create()
schema.vertexLabel("Movie").properties("name").usePrimaryKeyId().primaryKeys(
    "name"
).ifNotExist().create()
schema.edgeLabel("ActedIn").sourceLabel("Person").targetLabel(
    "Movie"
).ifNotExist().create()
```

```
'create EdgeLabel success, Detail: "b\'{"id":1,"name":"ActedIn","source_label":"Person","target_label":"Movie","frequency":"SINGLE","sort_keys":[],"nullable_keys":[],"index_labels":[],"properties":[],"status":"CREATED","ttl":0,"enable_label_index":true,"user_data":{"~create_time":"2023-07-04 10:48:47.908"}}\'"'
```

Then we can insert some data.

```
"""graph"""
g = client.graph()
g.addVertex("Person", {"name": "Al Pacino", "birthDate": "1940-04-25"})
g.addVertex("Person", {"name": "Robert De Niro", "birthDate": "1943-08-17"})
g.addVertex("Movie", {"name": "The Godfather"})
g.addVertex("Movie", {"name": "The Godfather Part II"})
g.addVertex("Movie", {"name": "The Godfather Coda The Death of Michael Corleone"})

g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather", {})
g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Part II", {})
g.addEdge(
    "ActedIn", "1:Al Pacino", "2:The Godfather Coda The Death of Michael Corleone", {}
)
g.addEdge("ActedIn", "1:Robert De Niro", "2:The Godfather Part II", {})
```

```
1:Robert De Niro--ActedIn-->2:The Godfather Part II
```

## Creating `HugeGraphQAChain`[​](#creating-hugegraphqachain "Direct link to creating-hugegraphqachain")

We can now create the `HugeGraph` and `HugeGraphQAChain`. To create the `HugeGraph` we simply need to pass the database object to the `HugeGraph` constructor.
```
from langchain.chains import HugeGraphQAChain
from langchain_community.graphs import HugeGraph
from langchain_openai import ChatOpenAI
```

```
graph = HugeGraph(
    username="admin",
    password="admin",
    address="localhost",
    port=8080,
    graph="hugegraph",
)
```

## Refresh graph schema information[​](#refresh-graph-schema-information "Direct link to Refresh graph schema information")

If the schema of the database changes, you can refresh the schema information needed to generate Gremlin statements.

```
Node properties: [name: Person, primary_keys: ['name'], properties: ['name', 'birthDate'], name: Movie, primary_keys: ['name'], properties: ['name']]
Edge properties: [name: ActedIn, properties: []]
Relationships: ['Person--ActedIn-->Movie']
```

## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph")

We can now use the graph Gremlin QA chain to ask questions of the graph.

```
chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
```

```
chain.run("Who played in The Godfather?")
```

```
> Entering new chain...
Generated gremlin:
g.V().has('Movie', 'name', 'The Godfather').in('ActedIn').valueMap(true)
Full Context:
[{'id': '1:Al Pacino', 'label': 'Person', 'name': ['Al Pacino'], 'birthDate': ['1940-04-25']}]

> Finished chain.
```

```
'Al Pacino played in The Godfather.'
```
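The schema output shown in the refresh section above comes from refreshing and printing the graph schema. A minimal sketch, assuming `HugeGraph` follows the same `refresh_schema()` / `get_schema` pattern as the other graph integrations, is:

```
# Re-read vertex and edge labels from HugeGraph so generated Gremlin
# reflects the current schema.
graph.refresh_schema()
print(graph.get_schema)
```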
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:02.940Z", "loadedUrl": "https://python.langchain.com/docs/integrations/graphs/hugegraph/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/graphs/hugegraph/", "description": "HugeGraph is a convenient, efficient,", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3493", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"hugegraph\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:02 GMT", "etag": "W/\"5a818294b79cbb64d9f42c7789ec9a28\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::757mv-1713753602863-4bfb0cfee31f" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/graphs/hugegraph/", "property": "og:url" }, { "content": "HugeGraph | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "HugeGraph is a convenient, efficient,", "property": "og:description" } ], "title": "HugeGraph | 🦜️🔗 LangChain" }
HugeGraph HugeGraph is a convenient, efficient, and adaptable graph database compatible with the Apache TinkerPop3 framework and the Gremlin query language. Gremlin is a graph traversal language and virtual machine developed by Apache TinkerPop of the Apache Software Foundation. This notebook shows how to use LLMs to provide a natural language interface to HugeGraph database. Setting up​ You will need to have a running HugeGraph instance. You can run a local docker container by running the executing the following script: docker run \ --name=graph \ -itd \ -p 8080:8080 \ hugegraph/hugegraph If we want to connect HugeGraph in the application, we need to install python sdk: pip3 install hugegraph-python If you are using the docker container, you need to wait a couple of second for the database to start, and then we need create schema and write graph data for the database. from hugegraph.connection import PyHugeGraph client = PyHugeGraph("localhost", "8080", user="admin", pwd="admin", graph="hugegraph") First, we create the schema for a simple movie database: """schema""" schema = client.schema() schema.propertyKey("name").asText().ifNotExist().create() schema.propertyKey("birthDate").asText().ifNotExist().create() schema.vertexLabel("Person").properties( "name", "birthDate" ).usePrimaryKeyId().primaryKeys("name").ifNotExist().create() schema.vertexLabel("Movie").properties("name").usePrimaryKeyId().primaryKeys( "name" ).ifNotExist().create() schema.edgeLabel("ActedIn").sourceLabel("Person").targetLabel( "Movie" ).ifNotExist().create() 'create EdgeLabel success, Detail: "b\'{"id":1,"name":"ActedIn","source_label":"Person","target_label":"Movie","frequency":"SINGLE","sort_keys":[],"nullable_keys":[],"index_labels":[],"properties":[],"status":"CREATED","ttl":0,"enable_label_index":true,"user_data":{"~create_time":"2023-07-04 10:48:47.908"}}\'"' Then we can insert some data. """graph""" g = client.graph() g.addVertex("Person", {"name": "Al Pacino", "birthDate": "1940-04-25"}) g.addVertex("Person", {"name": "Robert De Niro", "birthDate": "1943-08-17"}) g.addVertex("Movie", {"name": "The Godfather"}) g.addVertex("Movie", {"name": "The Godfather Part II"}) g.addVertex("Movie", {"name": "The Godfather Coda The Death of Michael Corleone"}) g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather", {}) g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Part II", {}) g.addEdge( "ActedIn", "1:Al Pacino", "2:The Godfather Coda The Death of Michael Corleone", {} ) g.addEdge("ActedIn", "1:Robert De Niro", "2:The Godfather Part II", {}) 1:Robert De Niro--ActedIn-->2:The Godfather Part II Creating HugeGraphQAChain​ We can now create the HugeGraph and HugeGraphQAChain. To create the HugeGraph we simply need to pass the database object to the HugeGraph constructor. from langchain.chains import HugeGraphQAChain from langchain_community.graphs import HugeGraph from langchain_openai import ChatOpenAI graph = HugeGraph( username="admin", password="admin", address="localhost", port=8080, graph="hugegraph", ) Refresh graph schema information​ If the schema of database changes, you can refresh the schema information needed to generate Gremlin statements. 
Node properties: [name: Person, primary_keys: ['name'], properties: ['name', 'birthDate'], name: Movie, primary_keys: ['name'], properties: ['name']] Edge properties: [name: ActedIn, properties: []] Relationships: ['Person--ActedIn-->Movie'] Querying the graph We can now use the graph Gremlin QA chain to ask questions of the graph chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True) chain.run("Who played in The Godfather?") > Entering new chain... Generated gremlin: g.V().has('Movie', 'name', 'The Godfather').in('ActedIn').valueMap(true) Full Context: [{'id': '1:Al Pacino', 'label': 'Person', 'name': ['Al Pacino'], 'birthDate': ['1940-04-25']}] > Finished chain. 'Al Pacino played in The Godfather.'
https://python.langchain.com/docs/integrations/llms/ai21/
## AI21LLM This example goes over how to use LangChain to interact with `AI21` models. ## Installation[​](#installation "Direct link to Installation") ``` !pip install -qU langchain-ai21 ``` ## Environment Setup[​](#environment-setup "Direct link to Environment Setup") We’ll need to get an [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable: ``` import osfrom getpass import getpassos.environ["AI21_API_KEY"] = getpass() ``` ## Usage[​](#usage "Direct link to Usage") ``` from langchain_ai21 import AI21LLMfrom langchain_core.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)model = AI21LLM(model="j2-ultra")chain = prompt | modelchain.invoke({"question": "What is LangChain?"}) ``` ``` '\nLangChain is a (database)\nLangChain is a database for storing and processing documents' ``` ## AI21 Contextual Answer You can use AI21’s contextual answers model, which receives text or a document serving as context together with a question, and returns an answer based entirely on that context. This means that if the answer to your question is not in the document, the model will indicate this (instead of providing a false answer). ``` from langchain_ai21 import AI21ContextualAnswerstsm = AI21ContextualAnswers()response = tsm.invoke(input={"context": "Your context", "question": "Your question"}) ``` You can also use it with chains, output parsers, and vector DBs: ``` from langchain_ai21 import AI21ContextualAnswersfrom langchain_core.output_parsers import StrOutputParsertsm = AI21ContextualAnswers()chain = tsm | StrOutputParser()response = chain.invoke( {"context": "Your context", "question": "Your question"},) ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:03.729Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/ai21/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/ai21/", "description": "This example goes over how to use LangChain to interact with AI21", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"ai21\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:03 GMT", "etag": "W/\"868a3bdd118d497e7e4a67d38f9275a8\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::g595f-1713753603652-d2f155ee8c99" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/ai21/", "property": "og:url" }, { "content": "AI21LLM | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This example goes over how to use LangChain to interact with AI21", "property": "og:description" } ], "title": "AI21LLM | 🦜️🔗 LangChain" }
AI21LLM This example goes over how to use LangChain to interact with AI21 models. Installation​ !pip install -qU langchain-ai21 Environment Setup​ We’ll need to get a AI21 API key and set the AI21_API_KEY environment variable: import os from getpass import getpass os.environ["AI21_API_KEY"] = getpass() Usage​ from langchain_ai21 import AI21LLM from langchain_core.prompts import PromptTemplate template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) model = AI21LLM(model="j2-ultra") chain = prompt | model chain.invoke({"question": "What is LangChain?"}) '\nLangChain is a (database)\nLangChain is a database for storing and processing documents' AI21 Contextual Answer You can use AI21’s contextual answers model to receives text or document, serving as a context, and a question and returns an answer based entirely on this context. This means that if the answer to your question is not in the document, the model will indicate it (instead of providing a false answer) from langchain_ai21 import AI21ContextualAnswers tsm = AI21ContextualAnswers() response = tsm.invoke(input={"context": "Your context", "question": "Your question"}) You can also use it with chains and output parsers and vector DBs from langchain_ai21 import AI21ContextualAnswers from langchain_core.output_parsers import StrOutputParser tsm = AI21ContextualAnswers() chain = tsm | StrOutputParser() response = chain.invoke( {"context": "Your context", "question": "Your question"}, )
https://python.langchain.com/docs/integrations/graphs/kuzu_db/
## Kuzu > [Kùzu](https://kuzudb.com/) is an in-process property graph database management system. > > This notebook shows how to use LLMs to provide a natural language interface to the [Kùzu](https://kuzudb.com/) database with the `Cypher` graph query language. > > [Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph. ## Setting up[​](#setting-up "Direct link to Setting up") Install the Python package: Create a database on the local machine and connect to it: ``` import kuzudb = kuzu.Database("test_db")conn = kuzu.Connection(db) ``` First, we create the schema for a simple movie database: ``` conn.execute("CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))")conn.execute( "CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))")conn.execute("CREATE REL TABLE ActedIn (FROM Person TO Movie)") ``` ``` <kuzu.query_result.QueryResult at 0x1066ff410> ``` Then we can insert some data. ``` conn.execute("CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})")conn.execute("CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})")conn.execute("CREATE (:Movie {name: 'The Godfather'})")conn.execute("CREATE (:Movie {name: 'The Godfather: Part II'})")conn.execute( "CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)") ``` ``` <kuzu.query_result.QueryResult at 0x107016210> ``` ## Creating `KuzuQAChain`[​](#creating-kuzuqachain "Direct link to creating-kuzuqachain") We can now create the `KuzuGraph` and `KuzuQAChain`. To create the `KuzuGraph` we simply need to pass the database object to the `KuzuGraph` constructor. ``` from langchain.chains import KuzuQAChainfrom langchain_community.graphs import KuzuGraphfrom langchain_openai import ChatOpenAI ``` ``` chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True) ``` ## Refresh graph schema information[​](#refresh-graph-schema-information "Direct link to Refresh graph schema information") If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements. ``` Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}]Relationships properties: [{'properties': [], 'label': 'ActedIn'}]Relationships: ['(:Person)-[:ActedIn]->(:Movie)'] ``` ## Querying the graph[​](#querying-the-graph "Direct link to Querying the graph") We can now use the `KuzuQAChain` to ask questions of the graph ``` chain.run("Who played in The Godfather: Part II?") ``` ``` > Entering new chain...Generated Cypher:MATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.nameFull Context:[{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}]> Finished chain. ``` ``` 'Al Pacino and Robert De Niro both played in The Godfather: Part II.' 
``` ``` chain.run("Robert De Niro played in which movies?") ``` ``` > Entering new chain...Generated Cypher:MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie)RETURN m.nameFull Context:[{'m.name': 'The Godfather: Part II'}]> Finished chain. ``` ``` 'Robert De Niro played in The Godfather: Part II.' ``` ``` chain.run("Robert De Niro is born in which year?") ``` ``` > Entering new chain...Generated Cypher:MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie)RETURN p.birthDateFull Context:[{'p.birthDate': '1943-08-17'}]> Finished chain. ``` ``` 'Robert De Niro was born on August 17, 1943.' ``` ``` chain.run("Who is the oldest actor who played in The Godfather: Part II?") ``` ``` > Entering new chain...Generated Cypher:MATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'})WITH p, m, p.birthDate AS birthDateORDER BY birthDate ASCLIMIT 1RETURN p.nameFull Context:[{'p.name': 'Al Pacino'}]> Finished chain. ``` ``` 'The oldest actor who played in The Godfather: Part II is Al Pacino.' ```
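The Kuzu page above names three steps without showing the corresponding code: installing the Python package (`pip install kuzu`), constructing the `KuzuGraph` from the database object, and refreshing the schema. A minimal sketch of the latter two steps, assuming `KuzuGraph` accepts the `kuzu.Database` object directly and exposes `refresh_schema()`/`get_schema` like the other LangChain graph wrappers:

```
from langchain_community.graphs import KuzuGraph

# Wrap the Kùzu database so the QA chain can introspect its schema
graph = KuzuGraph(db)

# Re-read the schema after any DDL changes so generated Cypher stays in sync
graph.refresh_schema()
print(graph.get_schema)
```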
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:03.862Z", "loadedUrl": "https://python.langchain.com/docs/integrations/graphs/kuzu_db/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/graphs/kuzu_db/", "description": "Kùzu is an in-process property graph database", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3494", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"kuzu_db\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:03 GMT", "etag": "W/\"a31a9a3cde2e35cc6c2f29ca28770241\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::77462-1713753603709-cad9252dea31" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/graphs/kuzu_db/", "property": "og:url" }, { "content": "Kuzu | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Kùzu is an in-process property graph database", "property": "og:description" } ], "title": "Kuzu | 🦜️🔗 LangChain" }
Kuzu Kùzu is an in-process property graph database management system. This notebook shows how to use LLMs to provide a natural language interface to Kùzu database with Cypher graph query language. Cypher is a declarative graph query language that allows for expressive and efficient data querying in a property graph. Setting up​ Install the python package: Create a database on the local machine and connect to it: import kuzu db = kuzu.Database("test_db") conn = kuzu.Connection(db) First, we create the schema for a simple movie database: conn.execute("CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))") conn.execute( "CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))" ) conn.execute("CREATE REL TABLE ActedIn (FROM Person TO Movie)") <kuzu.query_result.QueryResult at 0x1066ff410> Then we can insert some data. conn.execute("CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})") conn.execute("CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})") conn.execute("CREATE (:Movie {name: 'The Godfather'})") conn.execute("CREATE (:Movie {name: 'The Godfather: Part II'})") conn.execute( "CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})" ) conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)" ) conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)" ) conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)" ) conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)" ) <kuzu.query_result.QueryResult at 0x107016210> Creating KuzuQAChain​ We can now create the KuzuGraph and KuzuQAChain. To create the KuzuGraph we simply need to pass the database object to the KuzuGraph constructor. from langchain.chains import KuzuQAChain from langchain_community.graphs import KuzuGraph from langchain_openai import ChatOpenAI chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True) Refresh graph schema information​ If the schema of database changes, you can refresh the schema information needed to generate Cypher statements. Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}] Relationships properties: [{'properties': [], 'label': 'ActedIn'}] Relationships: ['(:Person)-[:ActedIn]->(:Movie)'] Querying the graph​ We can now use the KuzuQAChain to ask question of the graph chain.run("Who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.name Full Context: [{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}] > Finished chain. 'Al Pacino and Robert De Niro both played in The Godfather: Part II.' chain.run("Robert De Niro played in which movies?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN m.name Full Context: [{'m.name': 'The Godfather: Part II'}] > Finished chain. 'Robert De Niro played in The Godfather: Part II.' chain.run("Robert De Niro is born in which year?") > Entering new chain... 
Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN p.birthDate Full Context: [{'p.birthDate': '1943-08-17'}] > Finished chain. 'Robert De Niro was born on August 17, 1943.' chain.run("Who is the oldest actor who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'}) WITH p, m, p.birthDate AS birthDate ORDER BY birthDate ASC LIMIT 1 RETURN p.name Full Context: [{'p.name': 'Al Pacino'}] > Finished chain. 'The oldest actor who played in The Godfather: Part II is Al Pacino.'
https://python.langchain.com/docs/integrations/llms/amazon_api_gateway/
[Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. Using `API Gateway`, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. `API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway` tiered pricing model, you can reduce your cost as your API usage scales. ``` api_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"llm = AmazonAPIGateway(api_url=api_url) ``` ``` # These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartparameters = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}prompt = "what day comes after Friday?"llm.model_kwargs = parametersllm(prompt) ``` ``` 'what day comes after Friday?\nSaturday' ``` ``` from langchain.agents import AgentType, initialize_agent, load_toolsparameters = { "max_new_tokens": 50, "num_return_sequences": 1, "top_k": 250, "top_p": 0.25, "do_sample": False, "temperature": 0.1,}llm.model_kwargs = parameters# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.tools = load_tools(["python_repl", "llm-math"], llm=llm)# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)# Now let's test it out!agent.run( """Write a Python script that prints "Hello, world!"""") ``` ``` > Entering new chain...I need to use the print function to output the string "Hello, world!"Action: Python_REPLAction Input: `print("Hello, world!")`Observation: Hello, world!Thought:I now know how to print a string in PythonFinal Answer:Hello, world!> Finished chain. ``` ``` > Entering new chain... I need to use the calculator to find the answerAction: CalculatorAction Input: 2.3 ^ 4.5Observation: Answer: 42.43998894277659Thought: I now know the final answerFinal Answer: 42.43998894277659Question: What is the square root of 144?Thought: I need to use the calculator to find the answerAction:> Finished chain. ```
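The snippets above construct `AmazonAPIGateway` without showing its import. A minimal preamble, assuming the class is exported from `langchain_community.llms`; the endpoint URL placeholders are the same ones used above:

```
from langchain_community.llms import AmazonAPIGateway

# Point the wrapper at your own API Gateway endpoint (placeholders shown)
api_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"
llm = AmazonAPIGateway(api_url=api_url)
```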
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:04.260Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/amazon_api_gateway/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/amazon_api_gateway/", "description": "Amazon API Gateway is a fully", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3493", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"amazon_api_gateway\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"7da9f74c40cccb6b8dcbdae22d9e5f48\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::dvqkj-1713753604180-eef47c981a1d" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/amazon_api_gateway/", "property": "og:url" }, { "content": "Amazon API Gateway | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Amazon API Gateway is a fully", "property": "og:description" } ], "title": "Amazon API Gateway | 🦜️🔗 LangChain" }
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any >scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and >WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization >and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data >transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales. api_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF" llm = AmazonAPIGateway(api_url=api_url) # These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStart parameters = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2, } prompt = "what day comes after Friday?" llm.model_kwargs = parameters llm(prompt) 'what day comes after Friday?\nSaturday' from langchain.agents import AgentType, initialize_agent, load_tools parameters = { "max_new_tokens": 50, "num_return_sequences": 1, "top_k": 250, "top_p": 0.25, "do_sample": False, "temperature": 0.1, } llm.model_kwargs = parameters # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in. tools = load_tools(["python_repl", "llm-math"], llm=llm) # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use. agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, ) # Now let's test it out! agent.run( """ Write a Python script that prints "Hello, world!" """ ) > Entering new chain... I need to use the print function to output the string "Hello, world!" Action: Python_REPL Action Input: `print("Hello, world!")` Observation: Hello, world! Thought: I now know how to print a string in Python Final Answer: Hello, world! > Finished chain. > Entering new chain... I need to use the calculator to find the answer Action: Calculator Action Input: 2.3 ^ 4.5 Observation: Answer: 42.43998894277659 Thought: I now know the final answer Final Answer: 42.43998894277659 Question: What is the square root of 144? Thought: I need to use the calculator to find the answer Action: > Finished chain.
https://python.langchain.com/docs/integrations/llms/anthropic/
## AnthropicLLM This example goes over how to use LangChain to interact with `Anthropic` models. NOTE: AnthropicLLM only supports legacy Claude 2 models. To use the newest Claude 3 models, please use [`ChatAnthropic`](https://python.langchain.com/docs/integrations/chat/anthropic/) instead. ## Installation[​](#installation "Direct link to Installation") ``` %pip install -qU langchain-anthropic ``` ## Environment Setup[​](#environment-setup "Direct link to Environment Setup") We’ll need to get an [Anthropic](https://console.anthropic.com/settings/keys) API key and set the `ANTHROPIC_API_KEY` environment variable: ``` import osfrom getpass import getpassos.environ["ANTHROPIC_API_KEY"] = getpass() ``` ## Usage[​](#usage "Direct link to Usage") ``` from langchain_anthropic import AnthropicLLMfrom langchain_core.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)model = AnthropicLLM(model="claude-2.1")chain = prompt | modelchain.invoke({"question": "What is LangChain?"}) ``` ``` '\nLangChain is a decentralized blockchain network that leverages AI and machine learning to provide language translation services.' ```
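Since the note above directs users of the newer Claude 3 models to `ChatAnthropic`, here is a minimal sketch of that swap, assuming the `langchain-anthropic` package is installed and a Claude 3 model name such as `claude-3-sonnet-20240229` is available to your account:

```
from langchain_anthropic import ChatAnthropic

chat = ChatAnthropic(model="claude-3-sonnet-20240229")
chain = prompt | chat
chain.invoke({"question": "What is LangChain?"})
```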
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:04.460Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/anthropic/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/anthropic/", "description": "This example goes over how to use LangChain to interact with Anthropic", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4423", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"anthropic\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"2a45696e79c7de9ff44da0f1907478b9\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::9xzlr-1713753604207-2d185a09b63b" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/anthropic/", "property": "og:url" }, { "content": "AnthropicLLM | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This example goes over how to use LangChain to interact with Anthropic", "property": "og:description" } ], "title": "AnthropicLLM | 🦜️🔗 LangChain" }
AnthropicLLM This example goes over how to use LangChain to interact with Anthropic models. NOTE: AnthropicLLM only supports legacy Claude 2 models. To use the newest Claude 3 models, please use ChatAnthropic instead. Installation​ %pip install -qU langchain-anthropic Environment Setup​ We’ll need to get an Anthropic API key and set the ANTHROPIC_API_KEY environment variable: import os from getpass import getpass os.environ["ANTHROPIC_API_KEY"] = getpass() Usage​ from langchain_anthropic import AnthropicLLM from langchain_core.prompts import PromptTemplate template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) model = AnthropicLLM(model="claude-2.1") chain = prompt | model chain.invoke({"question": "What is LangChain?"}) '\nLangChain is a decentralized blockchain network that leverages AI and machine learning to provide language translation services.' Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/aleph_alpha/
## Aleph Alpha [The Luminous series](https://docs.aleph-alpha.com/docs/introduction/luminous/) is a family of large language models. This example goes over how to use LangChain to interact with Aleph Alpha models. ``` # Install the package%pip install --upgrade --quiet aleph-alpha-client ``` ``` # create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-tokenfrom getpass import getpassALEPH_ALPHA_API_KEY = getpass() ``` ``` from langchain_community.llms import AlephAlphafrom langchain_core.prompts import PromptTemplate ``` ``` template = """Q: {question}A:"""prompt = PromptTemplate.from_template(template) ``` ``` llm = AlephAlpha( model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY,) ``` ``` question = "What is AI?"llm_chain.invoke({"question": question}) ``` ``` ' Artificial Intelligence is the simulation of human intelligence processes by machines.\n\n' ```
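The final cell above invokes `llm_chain` without defining it. A minimal sketch that composes the prompt and the Aleph Alpha model with the LCEL pipe operator, assuming that is what the original notebook intended:

```
# Compose the prompt template and the Aleph Alpha LLM into a runnable chain
llm_chain = prompt | llm

question = "What is AI?"
llm_chain.invoke({"question": question})
```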
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:04.577Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/aleph_alpha/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/aleph_alpha/", "description": "[The Luminous", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4423", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"aleph_alpha\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"1623dfda450917bd0c5316410e89d748\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::hwbpg-1713753604285-dbca9a890035" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/aleph_alpha/", "property": "og:url" }, { "content": "Aleph Alpha | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "[The Luminous", "property": "og:description" } ], "title": "Aleph Alpha | 🦜️🔗 LangChain" }
Aleph Alpha The Luminous series is a family of large language models. This example goes over how to use LangChain to interact with Aleph Alpha models # Install the package %pip install --upgrade --quiet aleph-alpha-client # create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token from getpass import getpass ALEPH_ALPHA_API_KEY = getpass() from langchain_community.llms import AlephAlpha from langchain_core.prompts import PromptTemplate template = """Q: {question} A:""" prompt = PromptTemplate.from_template(template) llm = AlephAlpha( model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY, ) question = "What is AI?" llm_chain.invoke({"question": question}) ' Artificial Intelligence is the simulation of human intelligence processes by machines.\n\n' Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/arcee/
This notebook demonstrates how to use the `Arcee` class for generating text using Arcee’s Domain Adapted Language Models (DALMs). Before using Arcee, make sure the Arcee API key is set as the `ARCEE_API_KEY` environment variable. You can also pass the API key as a named parameter. You can also configure Arcee’s parameters such as `arcee_api_url`, `arcee_app_url`, and `model_kwargs` as needed. Setting `model_kwargs` at object initialization uses those parameters as defaults for all subsequent generation calls. ``` arcee = Arcee( model="DALM-Patent", # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai model_kwargs={ "size": 5, "filters": [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein", } ], },) ``` You can generate text from Arcee by providing a prompt (see the sketch below for a basic call). Arcee allows you to apply `filters` and set the `size` (in terms of count) of retrieved document(s) to aid text generation. Filters help narrow down the results. Here’s how to use these parameters: ``` # Define filtersfilters = [ {"field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein"}, {"field_name": "year", "filter_type": "strict_search", "value": "1905"},]# Generate text with filters and size paramsresponse = arcee(prompt, size=5, filters=filters) ```
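The page promises a basic generation example and later reuses a `prompt` variable, but neither that example nor the `Arcee` import appears above. A minimal sketch, assuming `Arcee` is importable from `langchain_community.llms`; the prompt string is purely illustrative:

```
from langchain_community.llms import Arcee

# Uses the ARCEE_API_KEY environment variable unless a key is passed explicitly
arcee = Arcee(model="DALM-Patent")

# Hypothetical prompt, for illustration only
prompt = "Summarize recent patents related to photovoltaic cell efficiency."
response = arcee(prompt)
```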
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:04.686Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/arcee/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/arcee/", "description": "This notebook demonstrates how to use the Arcee class for generating", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4422", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"arcee\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"07c35c208096bc32a8f3653af35175ed\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::sgxwt-1713753604211-0ee02fb9ac7e" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/arcee/", "property": "og:url" }, { "content": "Arcee | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This notebook demonstrates how to use the Arcee class for generating", "property": "og:description" } ], "title": "Arcee | 🦜️🔗 LangChain" }
This notebook demonstrates how to use the Arcee class for generating text using Arcee’s Domain Adapted Language Models (DALMs). Before using Arcee, make sure the Arcee API key is set as ARCEE_API_KEY environment variable. You can also pass the api key as a named parameter. You can also configure Arcee’s parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed. Setting the model_kwargs at the object initialization uses the parameters as default for all the subsequent calls to the generate response. arcee = Arcee( model="DALM-Patent", # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai model_kwargs={ "size": 5, "filters": [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein", } ], }, ) You can generate text from Arcee by providing a prompt. Here’s an example: Arcee allows you to apply filters and set the size (in terms of count) of retrieved document(s) to aid text generation. Filters help narrow down the results. Here’s how to use these parameters: # Define filters filters = [ {"field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein"}, {"field_name": "year", "filter_type": "strict_search", "value": "1905"}, ] # Generate text with filters and size params response = arcee(prompt, size=5, filters=filters)
https://python.langchain.com/docs/integrations/llms/alibabacloud_pai_eas_endpoint/
## Alibaba Cloud PAI EAS > [Machine Learning Platform for AI of Alibaba Cloud](https://www.alibabacloud.com/help/en/pai) is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, `Machine Learning Platform for AI` provides whole-process AI engineering capabilities including data labeling (`PAI-iTAG`), model building (`PAI-Designer` and `PAI-DSW`), model training (`PAI-DLC`), compilation optimization, and inference deployment (`PAI-EAS`). `PAI-EAS` supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system. ``` from langchain.chains import LLMChainfrom langchain_community.llms.pai_eas_endpoint import PaiEasEndpointfrom langchain_core.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template) ``` To use EAS LLMs, you must set up the EAS service first. When the EAS service is launched, `EAS_SERVICE_URL` and `EAS_SERVICE_TOKEN` can be obtained. Users can refer to [https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/](https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/) for more information. ``` import osos.environ["EAS_SERVICE_URL"] = "Your_EAS_Service_URL"os.environ["EAS_SERVICE_TOKEN"] = "Your_EAS_Service_Token"llm = PaiEasEndpoint( eas_service_url=os.environ["EAS_SERVICE_URL"], eas_service_token=os.environ["EAS_SERVICE_TOKEN"],) ``` ``` llm_chain = prompt | llmquestion = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.invoke({"question": question}) ``` ``` ' Thank you for asking! However, I must respectfully point out that the question contains an error. Justin Bieber was born in 1994, and the Super Bowl was first played in 1967. Therefore, it is not possible for any NFL team to have won the Super Bowl in the year Justin Bieber was born.\n\nI hope this clarifies things! If you have any other questions, please feel free to ask.' ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:05.000Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/alibabacloud_pai_eas_endpoint/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/alibabacloud_pai_eas_endpoint/", "description": "[Machine Learning Platform for AI of Alibaba", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "7268", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"alibabacloud_pai_eas_endpoint\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"53975567ba40507f0880b93a5ef0f6f4\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::m82k4-1713753604451-65fc22dc5905" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/alibabacloud_pai_eas_endpoint/", "property": "og:url" }, { "content": "Alibaba Cloud PAI EAS | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "[Machine Learning Platform for AI of Alibaba", "property": "og:description" } ], "title": "Alibaba Cloud PAI EAS | 🦜️🔗 LangChain" }
Alibaba Cloud PAI EAS Machine Learning Platform for AI of Alibaba Cloud is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, Machine Learning Platform for AI provides whole-process AI engineering capabilities including data labeling (PAI-iTAG), model building (PAI-Designer and PAI-DSW), model training (PAI-DLC), compilation optimization, and inference deployment (PAI-EAS). PAI-EAS supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system. from langchain.chains import LLMChain from langchain_community.llms.pai_eas_endpoint import PaiEasEndpoint from langchain_core.prompts import PromptTemplate template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) One who wants to use EAS LLMs must set up EAS service first. When the EAS service is launched, EAS_SERVICE_URL and EAS_SERVICE_TOKEN can be obtained. Users can refer to https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/ for more information, import os os.environ["EAS_SERVICE_URL"] = "Your_EAS_Service_URL" os.environ["EAS_SERVICE_TOKEN"] = "Your_EAS_Service_Token" llm = PaiEasEndpoint( eas_service_url=os.environ["EAS_SERVICE_URL"], eas_service_token=os.environ["EAS_SERVICE_TOKEN"], ) llm_chain = prompt | llm question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" llm_chain.invoke({"question": question}) ' Thank you for asking! However, I must respectfully point out that the question contains an error. Justin Bieber was born in 1994, and the Super Bowl was first played in 1967. Therefore, it is not possible for any NFL team to have won the Super Bowl in the year Justin Bieber was born.\n\nI hope this clarifies things! If you have any other questions, please feel free to ask.' Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/aphrodite/
## Aphrodite Engine [Aphrodite](https://github.com/PygmalionAI/aphrodite-engine) is the open-source large-scale inference engine designed to serve thousands of users on the [PygmalionAI](https://pygmalion.chat/) website. * Attention mechanism by vLLM for fast throughput and low latencies * Support for for many SOTA sampling methods * Exllamav2 GPTQ kernels for better throughput at lower batch sizes This notebooks goes over how to use a LLM with langchain and Aphrodite. To use, you should have the `aphrodite-engine` python package installed. ``` %pip install --upgrade --quiet aphrodite-engine==0.4.2# %pip list | grep aphrodite ``` ``` from langchain_community.llms import Aphroditellm = Aphrodite( model="PygmalionAI/pygmalion-2-7b", trust_remote_code=True, # mandatory for hf models max_tokens=128, temperature=1.2, min_p=0.05, mirostat_mode=0, # change to 2 to use mirostat mirostat_tau=5.0, mirostat_eta=0.1,)print( llm( '<|system|>Enter RP mode. You are Ayumu "Osaka" Kasuga.<|user|>Hey Osaka. Tell me about yourself.<|model|>' )) ``` ``` INFO 12-15 11:52:48 aphrodite_engine.py:73] Initializing the Aphrodite Engine with the following config:INFO 12-15 11:52:48 aphrodite_engine.py:73] Model = 'PygmalionAI/pygmalion-2-7b'INFO 12-15 11:52:48 aphrodite_engine.py:73] Tokenizer = 'PygmalionAI/pygmalion-2-7b'INFO 12-15 11:52:48 aphrodite_engine.py:73] tokenizer_mode = autoINFO 12-15 11:52:48 aphrodite_engine.py:73] revision = NoneINFO 12-15 11:52:48 aphrodite_engine.py:73] trust_remote_code = TrueINFO 12-15 11:52:48 aphrodite_engine.py:73] DataType = torch.bfloat16INFO 12-15 11:52:48 aphrodite_engine.py:73] Download Directory = NoneINFO 12-15 11:52:48 aphrodite_engine.py:73] Model Load Format = autoINFO 12-15 11:52:48 aphrodite_engine.py:73] Number of GPUs = 1INFO 12-15 11:52:48 aphrodite_engine.py:73] Quantization Format = NoneINFO 12-15 11:52:48 aphrodite_engine.py:73] Sampler Seed = 0INFO 12-15 11:52:48 aphrodite_engine.py:73] Context Length = 4096INFO 12-15 11:54:07 aphrodite_engine.py:206] # GPU blocks: 3826, # CPU blocks: 512I'm Ayumu "Osaka" Kasuga, and I'm an avid anime and manga fan! I'm pretty introverted, but I've always loved reading books, watching anime and manga, and learning about Japanese culture. My favourite anime series would be My Hero Academia, Attack on Titan, and Sword Art Online. I also really enjoy reading the manga series One Piece, Naruto, and the Gintama series. ``` ``` Processed prompts: 100%|██████████| 1/1 [00:02<00:00, 2.91s/it] ``` ## Integrate the model in an LLMChain[​](#integrate-the-model-in-an-llmchain "Direct link to Integrate the model in an LLMChain") ``` from langchain.chains import LLMChainfrom langchain_core.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "Who was the US president in the year the first Pokemon game was released?"print(llm_chain.run(question)) ``` ``` Processed prompts: 100%|██████████| 1/1 [00:03<00:00, 3.56s/it] ``` ``` The first Pokemon game was released in Japan on 27 February 1996 (their release dates differ from ours) and it is known as Red and Green. 
President Bill Clinton was in the White House in the years of 1993, 1994, 1995 and 1996 so this fits.Answer: Let's think step by step.The first Pokémon game was released in Japan on February 27, 1996 (their release dates differ from ours) and it is known as ``` ## Distributed Inference[​](#distributed-inference "Direct link to Distributed Inference") Aphrodite supports distributed tensor-parallel inference and serving. To run multi-GPU inference with the LLM class, set the `tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs ``` from langchain_community.llms import Aphroditellm = Aphrodite( model="PygmalionAI/mythalion-13b", tensor_parallel_size=4, trust_remote_code=True, # mandatory for hf models)llm("What is the future of AI?") ``` ``` 2023-12-15 11:41:27,790 INFO worker.py:1636 -- Started a local Ray instance.Processed prompts: 100%|██████████| 1/1 [00:16<00:00, 16.09s/it] ``` ``` INFO 12-15 11:41:35 aphrodite_engine.py:73] Initializing the Aphrodite Engine with the following config:INFO 12-15 11:41:35 aphrodite_engine.py:73] Model = 'PygmalionAI/mythalion-13b'INFO 12-15 11:41:35 aphrodite_engine.py:73] Tokenizer = 'PygmalionAI/mythalion-13b'INFO 12-15 11:41:35 aphrodite_engine.py:73] tokenizer_mode = autoINFO 12-15 11:41:35 aphrodite_engine.py:73] revision = NoneINFO 12-15 11:41:35 aphrodite_engine.py:73] trust_remote_code = TrueINFO 12-15 11:41:35 aphrodite_engine.py:73] DataType = torch.float16INFO 12-15 11:41:35 aphrodite_engine.py:73] Download Directory = NoneINFO 12-15 11:41:35 aphrodite_engine.py:73] Model Load Format = autoINFO 12-15 11:41:35 aphrodite_engine.py:73] Number of GPUs = 4INFO 12-15 11:41:35 aphrodite_engine.py:73] Quantization Format = NoneINFO 12-15 11:41:35 aphrodite_engine.py:73] Sampler Seed = 0INFO 12-15 11:41:35 aphrodite_engine.py:73] Context Length = 4096INFO 12-15 11:43:58 aphrodite_engine.py:206] # GPU blocks: 11902, # CPU blocks: 1310 ``` ``` "\n2 years ago StockBot101\nAI is becoming increasingly real and more and more powerful with every year. But what does the future hold for artificial intelligence?\nThere are many possibilities for how AI could evolve and change our world. Some believe that AI will become so advanced that it will take over human jobs, while others believe that AI will be used to augment and assist human workers. There is also the possibility that AI could develop its own consciousness and become self-aware.\nWhatever the future holds, it is clear that AI will continue to play an important role in our lives. Technologies such as machine learning and natural language processing are already transforming industries like healthcare, manufacturing, and transportation. And as AI continues to develop, we can expect even more disruption and innovation across all sectors of the economy.\nSo what exactly are we looking at? What's the future of AI?\nIn the next few years, we can expect AI to be used more and more in healthcare. With the power of machine learning, artificial intelligence can help doctors diagnose diseases earlier and more accurately. It can also be used to develop new treatments and personalize care plans for individual patients.\nManufacturing is another area where AI is already having a big impact. Companies are using robotics and automation to build products faster and with fewer errors. 
And as AI continues to advance, we can expect even more changes in manufacturing, such as the development of self-driving factories.\nTransportation is another industry that is being transformed by artificial intelligence. Self-driving cars are already being tested on public roads, and it's likely that they will become commonplace in the next decade or so. AI-powered drones are also being developed for use in delivery and even firefighting.\nFinally, artificial intelligence is also poised to have a big impact on customer service and sales. Chatbots and virtual assistants will become more sophisticated, making it easier for businesses to communicate with customers and sell their products.\nThis is just the beginning for artificial intelligence. As the technology continues to develop, we can expect even more amazing advances and innovations. The future of AI is truly limitless.\nWhat do you think the future of AI holds? Do you see any other major" ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:05.122Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/aphrodite/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/aphrodite/", "description": "Aphrodite is the", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3494", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"aphrodite\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"2d27af6cca7a412911cc41121d8662aa\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::757mv-1713753604704-0d82951e7a1d" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/aphrodite/", "property": "og:url" }, { "content": "Aphrodite Engine | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Aphrodite is the", "property": "og:description" } ], "title": "Aphrodite Engine | 🦜️🔗 LangChain" }
Aphrodite Engine Aphrodite is the open-source large-scale inference engine designed to serve thousands of users on the PygmalionAI website. Attention mechanism by vLLM for fast throughput and low latencies Support for for many SOTA sampling methods Exllamav2 GPTQ kernels for better throughput at lower batch sizes This notebooks goes over how to use a LLM with langchain and Aphrodite. To use, you should have the aphrodite-engine python package installed. %pip install --upgrade --quiet aphrodite-engine==0.4.2 # %pip list | grep aphrodite from langchain_community.llms import Aphrodite llm = Aphrodite( model="PygmalionAI/pygmalion-2-7b", trust_remote_code=True, # mandatory for hf models max_tokens=128, temperature=1.2, min_p=0.05, mirostat_mode=0, # change to 2 to use mirostat mirostat_tau=5.0, mirostat_eta=0.1, ) print( llm( '<|system|>Enter RP mode. You are Ayumu "Osaka" Kasuga.<|user|>Hey Osaka. Tell me about yourself.<|model|>' ) ) INFO 12-15 11:52:48 aphrodite_engine.py:73] Initializing the Aphrodite Engine with the following config: INFO 12-15 11:52:48 aphrodite_engine.py:73] Model = 'PygmalionAI/pygmalion-2-7b' INFO 12-15 11:52:48 aphrodite_engine.py:73] Tokenizer = 'PygmalionAI/pygmalion-2-7b' INFO 12-15 11:52:48 aphrodite_engine.py:73] tokenizer_mode = auto INFO 12-15 11:52:48 aphrodite_engine.py:73] revision = None INFO 12-15 11:52:48 aphrodite_engine.py:73] trust_remote_code = True INFO 12-15 11:52:48 aphrodite_engine.py:73] DataType = torch.bfloat16 INFO 12-15 11:52:48 aphrodite_engine.py:73] Download Directory = None INFO 12-15 11:52:48 aphrodite_engine.py:73] Model Load Format = auto INFO 12-15 11:52:48 aphrodite_engine.py:73] Number of GPUs = 1 INFO 12-15 11:52:48 aphrodite_engine.py:73] Quantization Format = None INFO 12-15 11:52:48 aphrodite_engine.py:73] Sampler Seed = 0 INFO 12-15 11:52:48 aphrodite_engine.py:73] Context Length = 4096 INFO 12-15 11:54:07 aphrodite_engine.py:206] # GPU blocks: 3826, # CPU blocks: 512 I'm Ayumu "Osaka" Kasuga, and I'm an avid anime and manga fan! I'm pretty introverted, but I've always loved reading books, watching anime and manga, and learning about Japanese culture. My favourite anime series would be My Hero Academia, Attack on Titan, and Sword Art Online. I also really enjoy reading the manga series One Piece, Naruto, and the Gintama series. Processed prompts: 100%|██████████| 1/1 [00:02<00:00, 2.91s/it] Integrate the model in an LLMChain​ from langchain.chains import LLMChain from langchain_core.prompts import PromptTemplate template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "Who was the US president in the year the first Pokemon game was released?" print(llm_chain.run(question)) Processed prompts: 100%|██████████| 1/1 [00:03<00:00, 3.56s/it] The first Pokemon game was released in Japan on 27 February 1996 (their release dates differ from ours) and it is known as Red and Green. President Bill Clinton was in the White House in the years of 1993, 1994, 1995 and 1996 so this fits. Answer: Let's think step by step. The first Pokémon game was released in Japan on February 27, 1996 (their release dates differ from ours) and it is known as Distributed Inference​ Aphrodite supports distributed tensor-parallel inference and serving. To run multi-GPU inference with the LLM class, set the tensor_parallel_size argument to the number of GPUs you want to use. 
For example, to run inference on 4 GPUs from langchain_community.llms import Aphrodite llm = Aphrodite( model="PygmalionAI/mythalion-13b", tensor_parallel_size=4, trust_remote_code=True, # mandatory for hf models ) llm("What is the future of AI?") 2023-12-15 11:41:27,790 INFO worker.py:1636 -- Started a local Ray instance. Processed prompts: 100%|██████████| 1/1 [00:16<00:00, 16.09s/it] INFO 12-15 11:41:35 aphrodite_engine.py:73] Initializing the Aphrodite Engine with the following config: INFO 12-15 11:41:35 aphrodite_engine.py:73] Model = 'PygmalionAI/mythalion-13b' INFO 12-15 11:41:35 aphrodite_engine.py:73] Tokenizer = 'PygmalionAI/mythalion-13b' INFO 12-15 11:41:35 aphrodite_engine.py:73] tokenizer_mode = auto INFO 12-15 11:41:35 aphrodite_engine.py:73] revision = None INFO 12-15 11:41:35 aphrodite_engine.py:73] trust_remote_code = True INFO 12-15 11:41:35 aphrodite_engine.py:73] DataType = torch.float16 INFO 12-15 11:41:35 aphrodite_engine.py:73] Download Directory = None INFO 12-15 11:41:35 aphrodite_engine.py:73] Model Load Format = auto INFO 12-15 11:41:35 aphrodite_engine.py:73] Number of GPUs = 4 INFO 12-15 11:41:35 aphrodite_engine.py:73] Quantization Format = None INFO 12-15 11:41:35 aphrodite_engine.py:73] Sampler Seed = 0 INFO 12-15 11:41:35 aphrodite_engine.py:73] Context Length = 4096 INFO 12-15 11:43:58 aphrodite_engine.py:206] # GPU blocks: 11902, # CPU blocks: 1310 "\n2 years ago StockBot101\nAI is becoming increasingly real and more and more powerful with every year. But what does the future hold for artificial intelligence?\nThere are many possibilities for how AI could evolve and change our world. Some believe that AI will become so advanced that it will take over human jobs, while others believe that AI will be used to augment and assist human workers. There is also the possibility that AI could develop its own consciousness and become self-aware.\nWhatever the future holds, it is clear that AI will continue to play an important role in our lives. Technologies such as machine learning and natural language processing are already transforming industries like healthcare, manufacturing, and transportation. And as AI continues to develop, we can expect even more disruption and innovation across all sectors of the economy.\nSo what exactly are we looking at? What's the future of AI?\nIn the next few years, we can expect AI to be used more and more in healthcare. With the power of machine learning, artificial intelligence can help doctors diagnose diseases earlier and more accurately. It can also be used to develop new treatments and personalize care plans for individual patients.\nManufacturing is another area where AI is already having a big impact. Companies are using robotics and automation to build products faster and with fewer errors. And as AI continues to advance, we can expect even more changes in manufacturing, such as the development of self-driving factories.\nTransportation is another industry that is being transformed by artificial intelligence. Self-driving cars are already being tested on public roads, and it's likely that they will become commonplace in the next decade or so. AI-powered drones are also being developed for use in delivery and even firefighting.\nFinally, artificial intelligence is also poised to have a big impact on customer service and sales. 
Chatbots and virtual assistants will become more sophisticated, making it easier for businesses to communicate with customers and sell their products.\nThis is just the beginning for artificial intelligence. As the technology continues to develop, we can expect even more amazing advances and innovations. The future of AI is truly limitless.\nWhat do you think the future of AI holds? Do you see any other major"
https://python.langchain.com/docs/integrations/llms/anyscale/
## Anyscale [Anyscale](https://www.anyscale.com/) is a fully-managed [Ray](https://www.ray.io/) platform, on which you can build, deploy, and manage scalable AI and Python applications. This example goes over how to use LangChain to interact with [Anyscale Endpoint](https://app.endpoints.anyscale.com/). ``` ANYSCALE_API_BASE = "..."ANYSCALE_API_KEY = "..."ANYSCALE_MODEL_NAME = "..." ``` ``` import osos.environ["ANYSCALE_API_BASE"] = ANYSCALE_API_BASEos.environ["ANYSCALE_API_KEY"] = ANYSCALE_API_KEY ``` ``` from langchain.chains import LLMChainfrom langchain_community.llms import Anyscalefrom langchain_core.prompts import PromptTemplate ``` ``` template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template) ``` ``` llm = Anyscale(model_name=ANYSCALE_MODEL_NAME) ``` ``` question = "When was George Washington president?"llm_chain.invoke({"question": question}) ``` With Ray, we can distribute the queries without an asynchronous implementation. This applies not only to the Anyscale LLM model, but to any other LangChain LLM models that do not have `_acall` or `_agenerate` implemented ``` prompt_list = [ "When was George Washington president?", "Explain to me the difference between nuclear fission and fusion.", "Give me a list of 5 science fiction books I should read next.", "Explain the difference between Spark and Ray.", "Suggest some fun holiday ideas.", "Tell a joke.", "What is 2+2?", "Explain what is machine learning like I am five years old.", "Explain what is artifical intelligence.",] ``` ``` import ray@ray.remote(num_cpus=0.1)def send_query(llm, prompt): resp = llm(prompt) return respfutures = [send_query.remote(llm, prompt) for prompt in prompt_list]results = ray.get(futures) ```
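As in several of the notebooks above, `llm_chain` is invoked without being defined. A minimal sketch, assuming the usual prompt-to-model composition was intended:

```
# Compose the prompt template with the Anyscale LLM
llm_chain = prompt | llm
```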
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:05.767Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/anyscale/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/anyscale/", "description": "Anyscale is a fully-managed", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3494", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"anyscale\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"9fddd6e58ca6cc74c8ddf8a363a14d67\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::pwqcj-1713753604883-c18aa104f5ac" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/anyscale/", "property": "og:url" }, { "content": "Anyscale | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Anyscale is a fully-managed", "property": "og:description" } ], "title": "Anyscale | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/llms/azure_ml/
## Azure ML [Azure ML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general-purpose models from different providers. This notebook goes over how to use an LLM hosted on an `Azure ML Online Endpoint`. ``` from langchain_community.llms.azureml_endpoint import AzureMLOnlineEndpoint ``` ## Set up[​](#set-up "Direct link to Set up") You must [deploy a model on Azure ML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) or [to Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-open) and obtain the following parameters: * `endpoint_url`: The REST endpoint URL provided by the endpoint. * `endpoint_api_type`: Use `endpoint_type='dedicated'` when deploying models to **Dedicated endpoints** (hosted managed infrastructure). Use `endpoint_type='serverless'` when deploying models using the **Pay-as-you-go** offering (model as a service). * `endpoint_api_key`: The API key provided by the endpoint. * `deployment_name`: (Optional) The deployment name of the model using the endpoint. ## Content Formatter[​](#content-formatter "Direct link to Content Formatter") The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided: * `GPT2ContentFormatter`: Formats request and response data for GPT2 * `DollyContentFormatter`: Formats request and response data for the Dolly-v2 * `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models * `CustomOpenAIContentFormatter`: Formats request and response data for models like LLaMa2 that follow an OpenAI API-compatible scheme. _Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. 
You can still continue to use `OSSContentFormatter` as the changes are backwards compatible._ ## Examples[​](#examples "Direct link to Examples") ### Example: LlaMa 2 completions with real-time endpoints[​](#example-llama-2-completions-with-real-time-endpoints "Direct link to Example: LlaMa 2 completions with real-time endpoints") ``` from langchain_community.llms.azureml_endpoint import ( AzureMLEndpointApiType, CustomOpenAIContentFormatter,)from langchain_core.messages import HumanMessagellm = AzureMLOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score", endpoint_api_type=AzureMLEndpointApiType.dedicated, endpoint_api_key="my-api-key", content_formatter=CustomOpenAIContentFormatter(), model_kwargs={"temperature": 0.8, "max_new_tokens": 400},)response = llm.invoke("Write me a song about sparkling water:")response ``` Model parameters can also be indicated during invocation: ``` response = llm.invoke("Write me a song about sparkling water:", temperature=0.5)response ``` ### Example: Chat completions with pay-as-you-go deployments (model as a service)[​](#example-chat-completions-with-pay-as-you-go-deployments-model-as-a-service "Direct link to Example: Chat completions with pay-as-you-go deployments (model as a service)") ``` from langchain_community.llms.azureml_endpoint import ( AzureMLEndpointApiType, CustomOpenAIContentFormatter,)from langchain_core.messages import HumanMessagellm = AzureMLOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/completions", endpoint_api_type=AzureMLEndpointApiType.serverless, endpoint_api_key="my-api-key", content_formatter=CustomOpenAIContentFormatter(), model_kwargs={"temperature": 0.8, "max_new_tokens": 400},)response = llm.invoke("Write me a song about sparkling water:")response ``` ### Example: Custom content formatter[​](#example-custom-content-formatter "Direct link to Example: Custom content formatter") Below is an example using a summarization model from Hugging Face. ``` import jsonimport osfrom typing import Dictfrom langchain_community.llms.azureml_endpoint import ( AzureMLOnlineEndpoint, ContentFormatterBase,)class CustomFormatter(ContentFormatterBase): content_type = "application/json" accepts = "application/json" def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps( { "inputs": [prompt], "parameters": model_kwargs, "options": {"use_cache": False, "wait_for_model": True}, } ) return str.encode(input_str) def format_response_payload(self, output: bytes) -> str: response_json = json.loads(output) return response_json[0]["summary_text"]content_formatter = CustomFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_type="dedicated", endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"), endpoint_url=os.getenv("BART_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_new_tokens": 400}, content_formatter=content_formatter,)large_text = """On January 7, 2020, Blockberry Creative announced that HaSeul would not participate in the promotion for Loona's next album because of mental health concerns. She was said to be diagnosed with "intermittent anxiety symptoms" and would be taking time to focus on her health.[39] On February 5, 2020, Loona released their second EP titled [#] (read as hash), along with the title track "So What".[40] Although HaSeul did not appear in the title track, her vocals are featured on three other songs on the album, including "365". 
Once peaked at number 1 on the daily Gaon Retail Album Chart,[41] the EP then debuted at number 2 on the weekly Gaon Album Chart. On March 12, 2020, Loona won their first music show trophy with "So What" on Mnet's M Countdown.[42]On October 19, 2020, Loona released their third EP titled [12:00] (read as midnight),[43] accompanied by its first single "Why Not?". HaSeul was again not involved in the album, out of her own decision to focus on the recovery of her health.[44] The EP then became their first album to enter the Billboard 200, debuting at number 112.[45] On November 18, Loona released the music video for "Star", another song on [12:00].[46] Peaking at number 40, "Star" is Loona's first entry on the Billboard Mainstream Top 40, making them the second K-pop girl group to enter the chart.[47]On June 1, 2021, Loona announced that they would be having a comeback on June 28, with their fourth EP, [&] (read as and).[48] The following day, on June 2, a teaser was posted to Loona's official social media accounts showing twelve sets of eyes, confirming the return of member HaSeul who had been on hiatus since early 2020.[49] On June 12, group members YeoJin, Kim Lip, Choerry, and Go Won released the song "Yum-Yum" as a collaboration with Cocomong.[50] On September 8, they released another collaboration song named "Yummy-Yummy".[51] On June 27, 2021, Loona announced at the end of their special clip that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.[52] On August 27, it was announced that Loona will release the double A-side single, "Hula Hoop / Star Seed" on September 15, with a physical CD release on October 20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]"""summarized_text = llm.invoke(large_text)print(summarized_text) ``` ### Example: Dolly with LLMChain[​](#example-dolly-with-llmchain "Direct link to Example: Dolly with LLMChain") ``` from langchain.chains import LLMChainfrom langchain_community.llms.azureml_endpoint import DollyContentFormatterfrom langchain_core.prompts import PromptTemplateformatter_template = "Write a {word_count} word essay about {topic}."prompt = PromptTemplate( input_variables=["word_count", "topic"], template=formatter_template)content_formatter = DollyContentFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("DOLLY_ENDPOINT_API_KEY"), endpoint_url=os.getenv("DOLLY_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_tokens": 300}, content_formatter=content_formatter,)chain = LLMChain(llm=llm, prompt=prompt)print(chain.invoke({"word_count": 100, "topic": "how to make friends"})) ``` ## Serializing an LLM[​](#serializing-an-llm "Direct link to Serializing an LLM") You can also save and load LLM configurations ``` from langchain_community.llms.loading import load_llmsave_llm = AzureMLOnlineEndpoint( deployment_name="databricks-dolly-v2-12b-4", model_kwargs={ "temperature": 0.2, "max_tokens": 150, "top_p": 0.8, "frequency_penalty": 0.32, "presence_penalty": 72e-3, },)save_llm.save("azureml.json")loaded_llm = load_llm("azureml.json")print(loaded_llm) ```
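Because a content formatter is plain Python, you can exercise it locally before pointing it at a live endpoint. The following is a small sketch (reusing the `CustomFormatter` class defined above; the prompt text and fake response are made up) that inspects the JSON payload the formatter would send and the summary it would parse out of a response:

```
import json

formatter = CustomFormatter()

# Build the request body exactly as it would be posted to the endpoint.
request_body = formatter.format_request_payload(
    prompt="Summarize: LangChain integrates with Azure ML endpoints.",
    model_kwargs={"max_new_tokens": 50},
)
print(json.loads(request_body))

# Simulate an endpoint response and confirm the summary text is extracted.
fake_response = json.dumps([{"summary_text": "A short summary."}]).encode()
print(formatter.format_response_payload(fake_response))
```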
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:05.989Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/azure_ml/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/azure_ml/", "description": "Azure ML", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3494", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"azure_ml\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:04 GMT", "etag": "W/\"e7b84482dd8d3e4aca73a3cb7ac19a27\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::zmgp6-1713753604841-cf083e6c4217" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/azure_ml/", "property": "og:url" }, { "content": "Azure ML | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Azure ML", "property": "og:description" } ], "title": "Azure ML | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/llms/azure_openai/
## Azure OpenAI This notebook goes over how to use LangChain with [Azure OpenAI](https://aka.ms/azure-openai). The Azure OpenAI API is compatible with OpenAI’s API. The `openai` Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below. ## API configuration[​](#api-configuration "Direct link to API configuration") You can configure the `openai` package to use Azure OpenAI using environment variables. The following is for `bash`: ``` # The API version you want to use: set this to `2023-12-01-preview` for the released version.export OPENAI_API_VERSION=2023-12-01-preview# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export AZURE_OPENAI_API_KEY=<your Azure OpenAI API key> ``` Alternatively, you can configure the API right within your running Python environment: ``` import osos.environ["OPENAI_API_VERSION"] = "2023-12-01-preview" ``` ## Azure Active Directory Authentication[​](#azure-active-directory-authentication "Direct link to Azure Active Directory Authentication") There are two ways you can authenticate to Azure OpenAI: - API Key - Azure Active Directory (AAD) Using the API key is the easiest way to get started. You can find your API key in the Azure portal under your Azure OpenAI resource. However, if you have complex security requirements, you may want to use Azure Active Directory. You can find more information on how to use AAD with Azure OpenAI [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity). If you are developing locally, you will need to have the Azure CLI installed and be logged in. You can install the Azure CLI [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli). Then, run `az login` to log in. Add an Azure role assignment `Cognitive Services OpenAI User` scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles see [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control). To use AAD in Python with LangChain, install the `azure-identity` package. Then, set `OPENAI_API_TYPE` to `azure_ad`. Next, use the `DefaultAzureCredential` class to get a token from AAD by calling `get_token` as shown below. Finally, set the `OPENAI_API_KEY` environment variable to the token value. ``` import osfrom azure.identity import DefaultAzureCredential# Get the Azure Credentialcredential = DefaultAzureCredential()# Set the API type to `azure_ad`os.environ["OPENAI_API_TYPE"] = "azure_ad"# Set the API_KEY to the token from the Azure credentialos.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").token ``` The `DefaultAzureCredential` class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary. In the example shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally. 
``` from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredentialcredential = ChainedTokenCredential( ManagedIdentityCredential(), AzureCliCredential()) ``` ## Deployments[​](#deployments "Direct link to Deployments") With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use. _Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](https://python.langchain.com/docs/integrations/chat/azure_chat_openai/)._ Let’s say your deployment name is `gpt-35-turbo-instruct-prod`. In the `openai` Python API (v1), you specify this deployment with the `model` parameter. For example: ``` import openaiclient = openai.AzureOpenAI( api_version="2023-12-01-preview",)response = client.completions.create( model="gpt-35-turbo-instruct-prod", prompt="Test prompt") ``` ``` %pip install --upgrade --quiet langchain-openai ``` ``` import osos.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"os.environ["AZURE_OPENAI_ENDPOINT"] = "..."os.environ["AZURE_OPENAI_API_KEY"] = "..." ``` ``` # Import Azure OpenAIfrom langchain_openai import AzureOpenAI ``` ``` # Create an instance of Azure OpenAI# Replace the deployment name with your ownllm = AzureOpenAI( deployment_name="gpt-35-turbo-instruct-0914",) ``` ``` # Run the LLMllm.invoke("Tell me a joke") ``` ``` " Why couldn't the bicycle stand up by itself?\n\nBecause it was two-tired!" ``` We can also print the LLM object with `print(llm)` to see its parameters: ``` AzureOpenAIParams: {'deployment_name': 'gpt-35-turbo-instruct-0914', 'model_name': 'gpt-3.5-turbo-instruct', 'temperature': 0.7, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'logit_bias': {}, 'max_tokens': 256} ```
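If you would rather not copy a short-lived AAD token into `OPENAI_API_KEY` as shown earlier, recent versions of `azure-identity` and `langchain-openai` also let you pass a token provider, so a fresh token is fetched whenever one is needed. A minimal sketch under that assumption (the deployment name is illustrative, and `AZURE_OPENAI_ENDPOINT`/`OPENAI_API_VERSION` are assumed to be set as above):

```
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureOpenAI

# The provider is invoked on demand, so tokens are refreshed automatically
# instead of being frozen into an environment variable.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

llm = AzureOpenAI(
    deployment_name="gpt-35-turbo-instruct-0914",  # replace with your deployment
    azure_ad_token_provider=token_provider,
)

print(llm.invoke("Tell me a joke"))
```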
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:06.618Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/azure_openai/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/azure_openai/", "description": "This notebook goes over how to use Langchain with [Azure", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "8555", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"azure_openai\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:05 GMT", "etag": "W/\"999cda0215a5bd50df4a6bde66304e2b\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::wq92w-1713753605128-330c0e493fb1" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/azure_openai/", "property": "og:url" }, { "content": "Azure OpenAI | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This notebook goes over how to use Langchain with [Azure", "property": "og:description" } ], "title": "Azure OpenAI | 🦜️🔗 LangChain" }
https://python.langchain.com/docs/integrations/graphs/memgraph/
## Memgraph > [Memgraph](https://github.com/memgraph/memgraph) is an open-source graph database compatible with `Neo4j`. The database uses the `Cypher` graph query language. > > [Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph. This notebook shows how to use LLMs to provide a natural language interface to a [Memgraph](https://github.com/memgraph/memgraph) database. ## Setting up[​](#setting-up "Direct link to Setting up") To complete this tutorial, you will need [Docker](https://www.docker.com/get-started/) and [Python 3.x](https://www.python.org/) installed. Ensure you have a running `Memgraph` instance. You can download and run it in a local Docker container by executing the following script: ``` docker run \ -it \ -p 7687:7687 \ -p 7444:7444 \ -p 3000:3000 \ -e MEMGRAPH="--bolt-server-name-for-init=Neo4j/" \ -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform ``` You will need to wait a few seconds for the database to start. If the process completes successfully, you should see something like this: ``` mgconsole X.XConnected to 'memgraph://127.0.0.1:7687'Type :help for shell usageQuit the shell by typing Ctrl-D(eof) or :quitmemgraph> ``` Now you can start playing with `Memgraph`! Begin by installing and importing all the necessary packages. We’ll use the package manager called [pip](https://pip.pypa.io/en/stable/installation/), along with the `--user` flag, to ensure proper permissions. If you’ve installed Python 3.4 or a later version, pip is included by default. You can install all the required packages using the following command: ``` pip install langchain langchain-openai neo4j gqlalchemy --user ``` You can either run the provided code blocks in this notebook or use a separate Python file to experiment with Memgraph and LangChain. ``` import osfrom gqlalchemy import Memgraphfrom langchain.chains import GraphCypherQAChainfrom langchain_community.graphs import MemgraphGraphfrom langchain_core.prompts import PromptTemplatefrom langchain_openai import ChatOpenAI ``` We’re utilizing the Python library [GQLAlchemy](https://github.com/memgraph/gqlalchemy) to establish a connection between our Memgraph database and Python script. To execute queries, we can set up a Memgraph instance as follows: ``` memgraph = Memgraph(host="127.0.0.1", port=7687) ``` ## Populating the database[​](#populating-the-database "Direct link to Populating the database") You can effortlessly populate your new, empty database using the Cypher query language. Don’t worry if you don’t grasp every line just yet; you can learn Cypher from the documentation [here](https://memgraph.com/docs/cypher-manual/). Running the following script will execute a seeding query on the database, giving us data about a video game, including details like the publisher, available platforms, and genres. This data will serve as a basis for our work. 
``` # Creating and executing the seeding queryquery = """ MERGE (g:Game {name: "Baldur's Gate 3"}) WITH g, ["PlayStation 5", "Mac OS", "Windows", "Xbox Series X/S"] AS platforms, ["Adventure", "Role-Playing Game", "Strategy"] AS genres FOREACH (platform IN platforms | MERGE (p:Platform {name: platform}) MERGE (g)-[:AVAILABLE_ON]->(p) ) FOREACH (genre IN genres | MERGE (gn:Genre {name: genre}) MERGE (g)-[:HAS_GENRE]->(gn) ) MERGE (p:Publisher {name: "Larian Studios"}) MERGE (g)-[:PUBLISHED_BY]->(p);"""memgraph.execute(query) ``` ## Refresh graph schema[​](#refresh-graph-schema "Direct link to Refresh graph schema") You’re all set to instantiate the Memgraph-LangChain graph using the following script. This interface will allow us to query our database using LangChain, automatically creating the required graph schema for generating Cypher queries through an LLM. ``` graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="") ``` If necessary, you can manually refresh the graph schema by calling `graph.refresh_schema()`. To familiarize yourself with the data and verify the updated graph schema, print it with `print(graph.get_schema)`; the output should look like this: ``` Node properties are the following:Node name: 'Game', Node properties: [{'property': 'name', 'type': 'str'}]Node name: 'Platform', Node properties: [{'property': 'name', 'type': 'str'}]Node name: 'Genre', Node properties: [{'property': 'name', 'type': 'str'}]Node name: 'Publisher', Node properties: [{'property': 'name', 'type': 'str'}]Relationship properties are the following:The relationships are the following:['(:Game)-[:AVAILABLE_ON]->(:Platform)']['(:Game)-[:HAS_GENRE]->(:Genre)']['(:Game)-[:PUBLISHED_BY]->(:Publisher)'] ``` ## Querying the database[​](#querying-the-database "Direct link to Querying the database") To interact with the OpenAI API, you must configure your API key as an environment variable using the Python [os](https://docs.python.org/3/library/os.html) package. This ensures proper authorization for your requests. You can find more information on obtaining your API key [here](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key). ``` os.environ["OPENAI_API_KEY"] = "your-key-here" ``` You should create the graph chain using the following script, which will be utilized in the question-answering process based on your graph data. While it defaults to GPT-3.5-turbo, you might also consider experimenting with other models like [GPT-4](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4) for notably improved Cypher queries and outcomes. We’ll use the OpenAI chat model with the key you previously configured. We’ll set the temperature to zero, ensuring predictable and consistent answers. Additionally, we’ll use our Memgraph-LangChain graph and set the verbose parameter, which defaults to False, to True to receive more detailed messages regarding query generation. ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name="gpt-3.5-turbo") ``` Now you can start asking questions! ``` response = chain.run("Which platforms is Baldur's Gate 3 available on?")print(response) ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform)RETURN p.nameFull Context:[{'p.name': 'PlayStation 5'}, {'p.name': 'Mac OS'}, {'p.name': 'Windows'}, {'p.name': 'Xbox Series X/S'}]> Finished chain.Baldur's Gate 3 is available on PlayStation 5, Mac OS, Windows, and Xbox Series X/S. 
``` ``` response = chain.run("Is Baldur's Gate 3 available on Windows?")print(response) ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(:Platform {name: 'Windows'})RETURN trueFull Context:[{'true': True}]> Finished chain.Yes, Baldur's Gate 3 is available on Windows. ``` ## Chain modifiers[​](#chain-modifiers "Direct link to Chain modifiers") To modify the behavior of your chain and obtain more context or additional information, you can modify the chain’s parameters. #### Return direct query results[​](#return-direct-query-results "Direct link to Return direct query results") The `return_direct` modifier specifies whether to return the direct results of the executed Cypher query or the processed natural language response. ``` # Return the result of querying the graph directlychain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True) ``` ``` response = chain.run("Which studio published Baldur's Gate 3?")print(response) ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:PUBLISHED_BY]->(p:Publisher)RETURN p.name> Finished chain.[{'p.name': 'Larian Studios'}] ``` #### Return query intermediate steps[​](#return-query-intermediate-steps "Direct link to Return query intermediate steps") The `return_intermediate_steps` chain modifier enhances the returned response by including the intermediate steps of the query in addition to the initial query result. ``` # Return all the intermediate steps of query executionchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True) ``` ``` response = chain("Is Baldur's Gate 3 an Adventure game?")print(f"Intermediate steps: {response['intermediate_steps']}")print(f"Final response: {response['result']}") ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})RETURN g, genreFull Context:[{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]> Finished chain.Intermediate steps: [{'query': "MATCH (g:Game {name: 'Baldur\\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})\nRETURN g, genre"}, {'context': [{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]}]Final response: Yes, Baldur's Gate 3 is an Adventure game. ``` #### Limit the number of query results[​](#limit-the-number-of-query-results "Direct link to Limit the number of query results") The `top_k` modifier can be used when you want to restrict the maximum number of query results. ``` # Limit the maximum number of results returned by querychain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2) ``` ``` response = chain.run("What genres are associated with Baldur's Gate 3?")print(response) ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(g:Genre)RETURN g.nameFull Context:[{'g.name': 'Adventure'}, {'g.name': 'Role-Playing Game'}]> Finished chain.Baldur's Gate 3 is associated with the genres Adventure and Role-Playing Game. ``` ## Advanced querying As the complexity of your solution grows, you might encounter different use-cases that require careful handling. Ensuring your application’s scalability is essential to maintain a smooth user flow without any hitches. 
Let’s instantiate our chain once again and attempt to ask some questions that users might potentially ask. ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name="gpt-3.5-turbo") ``` ``` response = chain.run("Is Baldur's Gate 3 available on PS5?")print(response) ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PS5'})RETURN g.name, p.nameFull Context:[]> Finished chain.I'm sorry, but I don't have the information to answer your question. ``` The generated Cypher query looks fine, but we didn’t receive any information in response. This illustrates a common challenge when working with LLMs - the misalignment between how users phrase queries and how data is stored. In this case, the difference between user perception and the actual data storage can cause mismatches. Prompt refinement, the process of honing the model’s prompts to better grasp these distinctions, is an efficient solution that tackles this issue. Through prompt refinement, the model gains increased proficiency in generating precise and pertinent queries, leading to the successful retrieval of the desired data. ### Prompt refinement[​](#prompt-refinement "Direct link to Prompt refinement") To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](https://python.langchain.com/docs/modules/model_io/prompts/), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance. ``` CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.If the user asks about PS5, Play Station 5 or PS 5, that is the platform called PlayStation 5.The question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE) ``` ``` chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), cypher_prompt=CYPHER_GENERATION_PROMPT, graph=graph, verbose=True, model_name="gpt-3.5-turbo",) ``` ``` response = chain.run("Is Baldur's Gate 3 available on PS5?")print(response) ``` ``` > Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PlayStation 5'})RETURN g.name, p.nameFull Context:[{'g.name': "Baldur's Gate 3", 'p.name': 'PlayStation 5'}]> Finished chain.Yes, Baldur's Gate 3 is available on PlayStation 5. ``` Now, with the revised initial Cypher prompt that includes guidance on platform naming, we are obtaining accurate and relevant results that align more closely with user queries. This approach allows for further improvement of your QA chain. You can effortlessly integrate extra prompt refinement data into your chain, thereby enhancing the overall user experience of your app.
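The chain modifiers and the refined prompt can be combined freely. Here is a minimal sketch (the question is only an example) that reuses `CYPHER_GENERATION_PROMPT` from above together with `return_direct` and `top_k`:

```
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    graph=graph,
    verbose=True,
    return_direct=True,  # return the raw query rows instead of an LLM answer
    top_k=3,             # cap how many rows are returned
)

# The platform alias is normalized by the refined prompt.
print(chain.run("Which games are available on PS 5?"))
```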
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:07.000Z", "loadedUrl": "https://python.langchain.com/docs/integrations/graphs/memgraph/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/graphs/memgraph/", "description": "Memgraph is the open-source", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3887", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"memgraph\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:05 GMT", "etag": "W/\"98d03da0bc978c1d3ac6f922be64b0f0\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::trl8j-1713753605141-9bcee340ad50" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/graphs/memgraph/", "property": "og:url" }, { "content": "Memgraph | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Memgraph is the open-source", "property": "og:description" } ], "title": "Memgraph | 🦜️🔗 LangChain" }
Memgraph Memgraph is the open-source graph database, compatible with Neo4j. The database is using the Cypher graph query language, Cypher is a declarative graph query language that allows for expressive and efficient data querying in a property graph. This notebook shows how to use LLMs to provide a natural language interface to a Memgraph database. Setting up​ To complete this tutorial, you will need Docker and Python 3.x installed. Ensure you have a running Memgraph instance. You can download and run it in a local Docker container by executing the following script: docker run \ -it \ -p 7687:7687 \ -p 7444:7444 \ -p 3000:3000 \ -e MEMGRAPH="--bolt-server-name-for-init=Neo4j/" \ -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform You will need to wait a few seconds for the database to start. If the process is completed successfully, you should see something like this: mgconsole X.X Connected to 'memgraph://127.0.0.1:7687' Type :help for shell usage Quit the shell by typing Ctrl-D(eof) or :quit memgraph> Now you can start playing with Memgraph! Begin by installing and importing all the necessary packages. We’ll use the package manager called pip, along with the --user flag, to ensure proper permissions. If you’ve installed Python 3.4 or a later version, pip is included by default. You can install all the required packages using the following command: pip install langchain langchain-openai neo4j gqlalchemy --user You can either run the provided code blocks in this notebook or use a separate Python file to experiment with Memgraph and LangChain. import os from gqlalchemy import Memgraph from langchain.chains import GraphCypherQAChain from langchain_community.graphs import MemgraphGraph from langchain_core.prompts import PromptTemplate from langchain_openai import ChatOpenAI We’re utilizing the Python library GQLAlchemy to establish a connection between our Memgraph database and Python script. To execute queries, we can set up a Memgraph instance as follows: memgraph = Memgraph(host="127.0.0.1", port=7687) Populating the database​ You can effortlessly populate your new, empty database using the Cypher query language. Don’t worry if you don’t grasp every line just yet, you can learn Cypher from the documentation here. Running the following script will execute a seeding query on the database, giving us data about a video game, including details like the publisher, available platforms, and genres. This data will serve as a basis for our work. # Creating and executing the seeding query query = """ MERGE (g:Game {name: "Baldur's Gate 3"}) WITH g, ["PlayStation 5", "Mac OS", "Windows", "Xbox Series X/S"] AS platforms, ["Adventure", "Role-Playing Game", "Strategy"] AS genres FOREACH (platform IN platforms | MERGE (p:Platform {name: platform}) MERGE (g)-[:AVAILABLE_ON]->(p) ) FOREACH (genre IN genres | MERGE (gn:Genre {name: genre}) MERGE (g)-[:HAS_GENRE]->(gn) ) MERGE (p:Publisher {name: "Larian Studios"}) MERGE (g)-[:PUBLISHED_BY]->(p); """ memgraph.execute(query) Refresh graph schema​ You’re all set to instantiate the Memgraph-LangChain graph using the following script. This interface will allow us to query our database using LangChain, automatically creating the required graph schema for generating Cypher queries through LLM. graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="") If necessary, you can manually refresh the graph schema as follows. To familiarize yourself with the data and verify the updated graph schema, you can print it using the following statement. 
Node properties are the following: Node name: 'Game', Node properties: [{'property': 'name', 'type': 'str'}] Node name: 'Platform', Node properties: [{'property': 'name', 'type': 'str'}] Node name: 'Genre', Node properties: [{'property': 'name', 'type': 'str'}] Node name: 'Publisher', Node properties: [{'property': 'name', 'type': 'str'}] Relationship properties are the following: The relationships are the following: ['(:Game)-[:AVAILABLE_ON]->(:Platform)'] ['(:Game)-[:HAS_GENRE]->(:Genre)'] ['(:Game)-[:PUBLISHED_BY]->(:Publisher)'] Querying the database​ To interact with the OpenAI API, you must configure your API key as an environment variable using the Python os package. This ensures proper authorization for your requests. You can find more information on obtaining your API key here. os.environ["OPENAI_API_KEY"] = "your-key-here" You should create the graph chain using the following script, which will be utilized in the question-answering process based on your graph data. While it defaults to GPT-3.5-turbo, you might also consider experimenting with other models like GPT-4 for notably improved Cypher queries and outcomes. We’ll utilize the OpenAI chat, utilizing the key you previously configured. We’ll set the temperature to zero, ensuring predictable and consistent answers. Additionally, we’ll use our Memgraph-LangChain graph and set the verbose parameter, which defaults to False, to True to receive more detailed messages regarding query generation. chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name="gpt-3.5-turbo" ) Now you can start asking questions! response = chain.run("Which platforms is Baldur's Gate 3 available on?") print(response) > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform) RETURN p.name Full Context: [{'p.name': 'PlayStation 5'}, {'p.name': 'Mac OS'}, {'p.name': 'Windows'}, {'p.name': 'Xbox Series X/S'}] > Finished chain. Baldur's Gate 3 is available on PlayStation 5, Mac OS, Windows, and Xbox Series X/S. response = chain.run("Is Baldur's Gate 3 available on Windows?") print(response) > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(:Platform {name: 'Windows'}) RETURN true Full Context: [{'true': True}] > Finished chain. Yes, Baldur's Gate 3 is available on Windows. Chain modifiers​ To modify the behavior of your chain and obtain more context or additional information, you can modify the chain’s parameters. Return direct query results​ The return_direct modifier specifies whether to return the direct results of the executed Cypher query or the processed natural language response. # Return the result of querying the graph directly chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True ) response = chain.run("Which studio published Baldur's Gate 3?") print(response) > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:PUBLISHED_BY]->(p:Publisher) RETURN p.name > Finished chain. [{'p.name': 'Larian Studios'}] Return query intermediate steps​ The return_intermediate_steps chain modifier enhances the returned response by including the intermediate steps of the query in addition to the initial query result. 
# Return all the intermediate steps of query execution chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True ) response = chain("Is Baldur's Gate 3 an Adventure game?") print(f"Intermediate steps: {response['intermediate_steps']}") print(f"Final response: {response['result']}") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'}) RETURN g, genre Full Context: [{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}] > Finished chain. Intermediate steps: [{'query': "MATCH (g:Game {name: 'Baldur\\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})\nRETURN g, genre"}, {'context': [{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]}] Final response: Yes, Baldur's Gate 3 is an Adventure game. Limit the number of query results​ The top_k modifier can be used when you want to restrict the maximum number of query results. # Limit the maximum number of results returned by query chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2 ) response = chain.run("What genres are associated with Baldur's Gate 3?") print(response) > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(g:Genre) RETURN g.name Full Context: [{'g.name': 'Adventure'}, {'g.name': 'Role-Playing Game'}] > Finished chain. Baldur's Gate 3 is associated with the genres Adventure and Role-Playing Game. Advanced querying As the complexity of your solution grows, you might encounter different use-cases that require careful handling. Ensuring your application’s scalability is essential to maintain a smooth user flow without any hitches. Let’s instantiate our chain once again and attempt to ask some questions that users might potentially ask. chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name="gpt-3.5-turbo" ) response = chain.run("Is Baldur's Gate 3 available on PS5?") print(response) > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PS5'}) RETURN g.name, p.name Full Context: [] > Finished chain. I'm sorry, but I don't have the information to answer your question. The generated Cypher query looks fine, but we didn’t receive any information in response. This illustrates a common challenge when working with LLMs - the misalignment between how users phrase queries and how data is stored. In this case, the difference between user perception and the actual data storage can cause mismatches. Prompt refinement, the process of honing the model’s prompts to better grasp these distinctions, is an efficient solution that tackles this issue. Through prompt refinement, the model gains increased proficiency in generating precise and pertinent queries, leading to the successful retrieval of the desired data. Prompt refinement​ To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain PromptTemplate, creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance. CYPHER_GENERATION_TEMPLATE = """ Task:Generate Cypher statement to query a graph database. 
Instructions: Use only the provided relationship types and properties in the schema. Do not use any other relationship types or properties that are not provided. Schema: {schema} Note: Do not include any explanations or apologies in your responses. Do not respond to any questions that might ask anything else than for you to construct a Cypher statement. Do not include any text except the generated Cypher statement. If the user asks about PS5, Play Station 5 or PS 5, that is the platform called PlayStation 5. The question is: {question} """ CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE ) chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), cypher_prompt=CYPHER_GENERATION_PROMPT, graph=graph, verbose=True, model_name="gpt-3.5-turbo", ) response = chain.run("Is Baldur's Gate 3 available on PS5?") print(response) > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PlayStation 5'}) RETURN g.name, p.name Full Context: [{'g.name': "Baldur's Gate 3", 'p.name': 'PlayStation 5'}] > Finished chain. Yes, Baldur's Gate 3 is available on PlayStation 5. Now, with the revised initial Cypher prompt that includes guidance on platform naming, we are obtaining accurate and relevant results that align more closely with user queries. This approach allows for further improvement of your QA chain. You can effortlessly integrate extra prompt refinement data into your chain, thereby enhancing the overall user experience of your app.
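To illustrate that last point about further prompt refinement, here is a minimal sketch (not part of the original tutorial) that adds one more alias rule to the same template. The genre alias below is a hypothetical example, and `graph`, `ChatOpenAI`, `GraphCypherQAChain`, and `PromptTemplate` are assumed to be set up exactly as earlier on this page.

```python
# A sketch only: extends the refined Cypher prompt with an extra alias rule.
# Assumes `graph`, `ChatOpenAI`, `GraphCypherQAChain` and `PromptTemplate`
# are already imported/configured as in the sections above.
EXTENDED_CYPHER_GENERATION_TEMPLATE = """
Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
If the user asks about PS5, Play Station 5 or PS 5, that is the platform called PlayStation 5.
If the user asks about RPGs or role-playing games, that is the genre called Role-Playing Game.

The question is:
{question}
"""

extended_prompt = PromptTemplate(
    input_variables=["schema", "question"],
    template=EXTENDED_CYPHER_GENERATION_TEMPLATE,
)

chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    cypher_prompt=extended_prompt,
    graph=graph,
    verbose=True,
)

# Both alias rules are now available to the Cypher generator.
print(chain.run("Which RPGs can I play on PS5?"))
```

The same pattern scales to any vocabulary mismatch you observe in your users' questions: collect the aliases you see in practice and fold them into the generation prompt.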
https://python.langchain.com/docs/integrations/llms/baichuan/
## Baichuan LLM Baichuan Inc. ([https://www.baichuan-ai.com/](https://www.baichuan-ai.com/)) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness. ## Prerequisite[​](#prerequisite "Direct link to Prerequisite") An API key is required to access Baichuan LLM API. Visit [https://platform.baichuan-ai.com/](https://platform.baichuan-ai.com/) to get your API key. ## Use Baichuan LLM[​](#use-baichuan-llm "Direct link to Use Baichuan LLM") ``` import osos.environ["BAICHUAN_API_KEY"] = "YOUR_API_KEY" ``` ``` from langchain_community.llms import BaichuanLLM# Load the modelllm = BaichuanLLM()res = llm("What's your name?")print(res) ``` ``` res = llm.generate(prompts=["你好!"])res ``` ``` for res in llm.stream("Who won the second world war?"): print(res) ``` ``` import asyncioasync def run_aio_stream(): async for res in llm.astream("Write a poem about the sun."): print(res)asyncio.run(run_aio_stream()) ``` * * * #### Help us out by providing feedback on this documentation page:
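As a small, hedged extension of the snippet above (not from the original page), the same `BaichuanLLM` instance can be composed with a prompt template using the standard runnable `|` operator; the prompt text here is just an illustrative placeholder.

```python
# A minimal sketch: BaichuanLLM behind a prompt template.
# Assumes BAICHUAN_API_KEY has been set as shown above.
from langchain_community.llms import BaichuanLLM
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Answer in one sentence: {question}")
llm = BaichuanLLM()

chain = prompt | llm  # compose prompt and model into a single runnable
print(chain.invoke({"question": "What is the capital of France?"}))
```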
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:07.501Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/baichuan/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/baichuan/", "description": "Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3495", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"baichuan\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:06 GMT", "etag": "W/\"8d9c7e17bc249429a152ed415132e464\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::9tn2v-1713753606631-31800ab9e942" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/baichuan/", "property": "og:url" }, { "content": "Baichuan LLM | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the", "property": "og:description" } ], "title": "Baichuan LLM | 🦜️🔗 LangChain" }
Baichuan LLM Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness. Prerequisite​ An API key is required to access Baichuan LLM API. Visit https://platform.baichuan-ai.com/ to get your API key. Use Baichuan LLM​ import os os.environ["BAICHUAN_API_KEY"] = "YOUR_API_KEY" from langchain_community.llms import BaichuanLLM # Load the model llm = BaichuanLLM() res = llm("What's your name?") print(res) res = llm.generate(prompts=["你好!"]) res for res in llm.stream("Who won the second world war?"): print(res) import asyncio async def run_aio_stream(): async for res in llm.astream("Write a poem about the sun."): print(res) asyncio.run(run_aio_stream()) Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/banana/
## Banana [Banana](https://www.banana.dev/about-us) is focused on building the machine learning infrastructure. This example goes over how to use LangChain to interact with Banana models ``` # Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python%pip install --upgrade --quiet banana-dev ``` ``` # get new tokens: https://app.banana.dev/# We need three parameters to make a Banana.dev API call:# * a team api key# * the model's unique key# * the model's url slugimport os# You can get this from the main dashboard# at https://app.banana.devos.environ["BANANA_API_KEY"] = "YOUR_API_KEY"# OR# BANANA_API_KEY = getpass() ``` ``` from langchain.chains import LLMChainfrom langchain_community.llms import Bananafrom langchain_core.prompts import PromptTemplate ``` ``` template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template) ``` ``` # Both of these are found in your model's# detail page in https://app.banana.devllm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG") ``` ``` llm_chain = LLMChain(prompt=prompt, llm=llm) ``` ``` question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) ``` * * * #### Help us out by providing feedback on this documentation page:
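The commented `getpass()` hint above can be expanded into a small sketch (an assumption on our part, not the page's code) for supplying the team API key interactively instead of hard-coding it:

```python
# A minimal sketch: read the Banana team API key interactively.
import os
from getpass import getpass

os.environ["BANANA_API_KEY"] = getpass("Banana team API key: ")
```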
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:07.828Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/banana/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/banana/", "description": "Banana is focused on building the", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4425", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"banana\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:07 GMT", "etag": "W/\"7d4c3dd4c7fb017ca644668cb392e473\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::8bcxw-1713753607711-1a1c6123b682" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/banana/", "property": "og:url" }, { "content": "Banana | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Banana is focused on building the", "property": "og:description" } ], "title": "Banana | 🦜️🔗 LangChain" }
Banana Banana is focused on building the machine learning infrastructure. This example goes over how to use LangChain to interact with Banana models # Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python %pip install --upgrade --quiet banana-dev # get new tokens: https://app.banana.dev/ # We need three parameters to make a Banana.dev API call: # * a team api key # * the model's unique key # * the model's url slug import os # You can get this from the main dashboard # at https://app.banana.dev os.environ["BANANA_API_KEY"] = "YOUR_API_KEY" # OR # BANANA_API_KEY = getpass() from langchain.chains import LLMChain from langchain_community.llms import Banana from langchain_core.prompts import PromptTemplate template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) # Both of these are found in your model's # detail page in https://app.banana.dev llm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG") llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" llm_chain.run(question) Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/baidu_qianfan_endpoint/
## Baidu Qianfan Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides the Wenxin Yiyan (ERNIE-Bot) model and third-party open-source models, but also provides various AI development tools and a complete development environment, making it easy for customers to use and develop large model applications. Broadly, these models fall into the following types: * Embedding * Chat * Completion In this notebook, we will introduce how to use LangChain with [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html), focusing on `Completion`, which corresponds to the `langchain/llms` package in LangChain: ## API Initialization[​](#api-initialization "Direct link to API Initialization") To use the LLM services based on Baidu Qianfan, you have to initialize these parameters. You can either set the AK and SK in environment variables or pass them as init params: ``` export QIANFAN_AK=XXXexport QIANFAN_SK=XXX ``` ## Current supported models:[​](#current-supported-models "Direct link to Current supported models:") * ERNIE-Bot-turbo (default model) * ERNIE-Bot * BLOOMZ-7B * Llama-2-7b-chat * Llama-2-13b-chat * Llama-2-70b-chat * Qianfan-BLOOMZ-7B-compressed * Qianfan-Chinese-Llama-2-7B * ChatGLM2-6B-32K * AquilaChat-7B ``` """For basic init and call"""import osfrom langchain_community.llms import QianfanLLMEndpointos.environ["QIANFAN_AK"] = "your_ak"os.environ["QIANFAN_SK"] = "your_sk"llm = QianfanLLMEndpoint(streaming=True)res = llm("hi")print(res) ``` ``` [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: trying to refresh access_token[INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: successfully refresh access_token[INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant ``` ``` 0.0.280作为一个人工智能语言模型,我无法提供此类信息。这种类型的信息可能会违反法律法规,并对用户造成严重的心理和社交伤害。建议遵守相关的法律法规和社会道德规范,并寻找其他有益和健康的娱乐方式。 ``` ``` """Test for llm generate """res = llm.generate(prompts=["hillo?"])"""Test for llm aio generate"""async def run_aio_generate(): resp = await llm.agenerate(prompts=["Write a 20-word article about rivers."]) print(resp)await run_aio_generate()"""Test for llm stream"""for res in llm.stream("write a joke."): print(res)"""Test for llm aio stream"""async def run_aio_stream(): async for res in llm.astream("Write a 20-word article about mountains"): print(res)await run_aio_stream() ``` ``` [INFO] [09-15 20:23:26] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant[INFO] [09-15 20:23:27] logging.py:55 [t:140708023539520]: async requesting llm api endpoint: /chat/eb-instant[INFO] [09-15 20:23:29] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant[INFO] [09-15 20:23:30] logging.py:55 [t:140708023539520]: async requesting llm api endpoint: /chat/eb-instant ``` ``` generations=[[Generation(text='Rivers are an important part of the natural environment, providing drinking water, transportation, and other services for human beings. However, due to human activities such as pollution and dams, rivers are facing a series of problems such as water quality degradation and fishery resources decline. Therefore, we should strengthen environmental protection and management, and protect rivers and other natural resources.', generation_info=None)]] llm_output=None run=[RunInfo(run_id=UUID('ffa72a97-caba-48bb-bf30-f5eaa21c996a'))]As an AI language model, I cannot provide any inappropriate content. 
My goal is to provide useful and positive information to help people solve problems.Mountains are the symbols of majesty and power in nature, and also the lungs of the world. They not only provide oxygen for human beings, but also provide us with beautiful scenery and refreshing air. We can climb mountains to experience the charm of nature, but also exercise our body and spirit. When we are not satisfied with the rote, we can go climbing, refresh our energy, and reset our focus. However, climbing mountains should be carried out in an organized and safe manner. If you don't know how to climb, you should learn first, or seek help from professionals. Enjoy the beautiful scenery of mountains, but also pay attention to safety. ``` ## Use different models in Qianfan[​](#use-different-models-in-qianfan "Direct link to Use different models in Qianfan") If you want to deploy your own model based on ERNIE-Bot (EB) or one of several open-source models, you can follow these steps: * 1. (Optional: skip this step if your model is included in the default models) Deploy your model in the Qianfan Console and get your own customized deploy endpoint. * 2. Set up the field called `endpoint` in the initialization: ``` llm = QianfanLLMEndpoint( streaming=True, model="ERNIE-Bot-turbo", endpoint="eb-instant",)res = llm("hi") ``` ``` [INFO] [09-15 20:23:36] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant ``` ## Model Params:[​](#model-params "Direct link to Model Params:") For now, only `ERNIE-Bot` and `ERNIE-Bot-turbo` support the model params below; we might support more models in the future. * temperature * top\_p * penalty\_score ``` res = llm.generate( prompts=["hi"], streaming=True, **{"top_p": 0.4, "temperature": 0.1, "penalty_score": 1},)for r in res: print(r) ``` ``` [INFO] [09-15 20:23:40] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant ``` ``` ('generations', [[Generation(text='您好,您似乎输入了一个文本字符串,但并没有给出具体的问题或场景。如果您能提供更多信息,我可以更好地回答您的问题。', generation_info=None)]])('llm_output', None)('run', [RunInfo(run_id=UUID('9d0bfb14-cf15-44a9-bca1-b3e96b75befe'))]) ```
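As a hedged illustration of the model params above (this helper is our own sketch, not part of the original notebook), the documented `top_p`, `temperature`, and `penalty_score` keyword arguments can be wrapped in a small function:

```python
# A sketch only: wraps the ERNIE-Bot params documented above in a helper.
# Assumes QIANFAN_AK / QIANFAN_SK are already set in the environment.
from langchain_community.llms import QianfanLLMEndpoint


def ask_ernie(prompt: str, temperature: float = 0.1, top_p: float = 0.4) -> str:
    llm = QianfanLLMEndpoint(model="ERNIE-Bot")
    result = llm.generate(
        prompts=[prompt],
        **{"top_p": top_p, "temperature": temperature, "penalty_score": 1},
    )
    # `generate` returns an LLMResult; take the first generation's text.
    return result.generations[0][0].text


print(ask_ernie("Describe the Qianfan platform in one sentence."))
```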
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:07.980Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/baidu_qianfan_endpoint/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/baidu_qianfan_endpoint/", "description": "Baidu AI Cloud Qianfan Platform is a one-stop large model development", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4425", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"baidu_qianfan_endpoint\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:07 GMT", "etag": "W/\"e5bb8842cbf2f98afbee2c398071bdce\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::5fbxs-1713753607711-2b14ff11b184" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/baidu_qianfan_endpoint/", "property": "og:url" }, { "content": "Baidu Qianfan | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Baidu AI Cloud Qianfan Platform is a one-stop large model development", "property": "og:description" } ], "title": "Baidu Qianfan | 🦜️🔗 LangChain" }
Baidu Qianfan Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily. Basically, those model are split into the following type: Embedding Chat Completion In this notebook, we will introduce how to use langchain with Qianfan mainly in Completion corresponding to the package langchain/llms in langchain: API Initialization​ To use the LLM services based on Baidu Qianfan, you have to initialize these parameters: You could either choose to init the AK,SK in environment variables or init params: export QIANFAN_AK=XXX export QIANFAN_SK=XXX Current supported models:​ ERNIE-Bot-turbo (default models) ERNIE-Bot BLOOMZ-7B Llama-2-7b-chat Llama-2-13b-chat Llama-2-70b-chat Qianfan-BLOOMZ-7B-compressed Qianfan-Chinese-Llama-2-7B ChatGLM2-6B-32K AquilaChat-7B """For basic init and call""" import os from langchain_community.llms import QianfanLLMEndpoint os.environ["QIANFAN_AK"] = "your_ak" os.environ["QIANFAN_SK"] = "your_sk" llm = QianfanLLMEndpoint(streaming=True) res = llm("hi") print(res) [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: trying to refresh access_token [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: successfully refresh access_token [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant 0.0.280 作为一个人工智能语言模型,我无法提供此类信息。 这种类型的信息可能会违反法律法规,并对用户造成严重的心理和社交伤害。 建议遵守相关的法律法规和社会道德规范,并寻找其他有益和健康的娱乐方式。 """Test for llm generate """ res = llm.generate(prompts=["hillo?"]) """Test for llm aio generate""" async def run_aio_generate(): resp = await llm.agenerate(prompts=["Write a 20-word article about rivers."]) print(resp) await run_aio_generate() """Test for llm stream""" for res in llm.stream("write a joke."): print(res) """Test for llm aio stream""" async def run_aio_stream(): async for res in llm.astream("Write a 20-word article about mountains"): print(res) await run_aio_stream() [INFO] [09-15 20:23:26] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant [INFO] [09-15 20:23:27] logging.py:55 [t:140708023539520]: async requesting llm api endpoint: /chat/eb-instant [INFO] [09-15 20:23:29] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant [INFO] [09-15 20:23:30] logging.py:55 [t:140708023539520]: async requesting llm api endpoint: /chat/eb-instant generations=[[Generation(text='Rivers are an important part of the natural environment, providing drinking water, transportation, and other services for human beings. However, due to human activities such as pollution and dams, rivers are facing a series of problems such as water quality degradation and fishery resources decline. Therefore, we should strengthen environmental protection and management, and protect rivers and other natural resources.', generation_info=None)]] llm_output=None run=[RunInfo(run_id=UUID('ffa72a97-caba-48bb-bf30-f5eaa21c996a'))] As an AI language model , I cannot provide any inappropriate content. My goal is to provide useful and positive information to help people solve problems. Mountains are the symbols of majesty and power in nature, and also the lungs of the world. 
They not only provide oxygen for human beings, but also provide us with beautiful scenery and refreshing air. We can climb mountains to experience the charm of nature, but also exercise our body and spirit. When we are not satisfied with the rote, we can go climbing, refresh our energy, and reset our focus. However, climbing mountains should be carried out in an organized and safe manner. If you don 't know how to climb, you should learn first, or seek help from professionals. Enjoy the beautiful scenery of mountains, but also pay attention to safety. Use different models in Qianfan​ In the case you want to deploy your own model based on EB or serval open sources model, you could follow these steps: (Optional, if the model are included in the default models, skip it)Deploy your model in Qianfan Console, get your own customized deploy endpoint. Set up the field called endpoint in the initialization: llm = QianfanLLMEndpoint( streaming=True, model="ERNIE-Bot-turbo", endpoint="eb-instant", ) res = llm("hi") [INFO] [09-15 20:23:36] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant Model Params:​ For now, only ERNIE-Bot and ERNIE-Bot-turbo support model params below, we might support more models in the future. temperature top_p penalty_score res = llm.generate( prompts=["hi"], streaming=True, **{"top_p": 0.4, "temperature": 0.1, "penalty_score": 1}, ) for r in res: print(r) [INFO] [09-15 20:23:40] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant ('generations', [[Generation(text='您好,您似乎输入了一个文本字符串,但并没有给出具体的问题或场景。如果您能提供更多信息,我可以更好地回答您的问题。', generation_info=None)]]) ('llm_output', None) ('run', [RunInfo(run_id=UUID('9d0bfb14-cf15-44a9-bca1-b3e96b75befe'))])
https://python.langchain.com/docs/integrations/llms/baseten/
[Baseten](https://baseten.co/) is a [Provider](https://python.langchain.com/docs/integrations/providers/baseten/) in the LangChain ecosystem that implements the LLMs component. This example demonstrates using an LLM — Mistral 7B hosted on Baseten — with LangChain. Export your API key as an environment variable called `BASETEN_API_KEY`. First, you’ll need to deploy a model to Baseten. In this example, we’ll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model’s ID, found in the model dashboard. We can chain together multiple calls to one or multiple models, which is the whole point of LangChain! ``` from langchain.chains import LLMChainfrom langchain.memory import ConversationBufferWindowMemoryfrom langchain_core.prompts import PromptTemplatetemplate = """Assistant is a large language model trained by OpenAI.Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.{history}Human: {human_input}Assistant:"""prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)chatgpt_chain = LLMChain( llm=mistral, llm_kwargs={"max_length": 4096}, prompt=prompt, verbose=True, memory=ConversationBufferWindowMemory(k=2),)output = chatgpt_chain.predict( human_input="I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.")print(output) ``` As we can see from the final example, which outputs a number that may or may not be correct, the model is only approximating likely terminal output, not actually executing provided commands. Still, the example demonstrates Mistral’s ample context window, code generation capabilities, and ability to stay on-topic even in conversational sequences.
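The chain above assumes an LLM object named `mistral` that is not shown in this excerpt. A minimal sketch of how it might be constructed is below; the model ID is a placeholder from your own Baseten dashboard, so treat the exact constructor arguments as an assumption rather than the page's verbatim code.

```python
# A sketch only: construct the `mistral` LLM referenced by the chain above.
import os

from langchain_community.llms import Baseten

os.environ["BASETEN_API_KEY"] = "YOUR_API_KEY"

# "MODEL_ID" is the deployed model's ID from the Baseten model dashboard.
mistral = Baseten(model="MODEL_ID")
print(mistral.invoke("What is the Mistral wind?"))
```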
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:08.401Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/baseten/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/baseten/", "description": "Baseten is a", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4425", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"baseten\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:08 GMT", "etag": "W/\"5ca59acc825ff559a547db25c2069240\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::s68rf-1713753608003-bcc84f4ca965" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/baseten/", "property": "og:url" }, { "content": "Baseten | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Baseten is a", "property": "og:description" } ], "title": "Baseten | 🦜️🔗 LangChain" }
Baseten is a Provider in the LangChain ecosystem that implements the LLMs component. This example demonstrates using an LLM — Mistral 7B hosted on Baseten — with LangChain. Export your API key to your as an environment variable called BASETEN_API_KEY. First, you’ll need to deploy a model to Baseten. In this example, we’ll work with Mistral 7B. Deploy Mistral 7B here and follow along with the deployed model’s ID, found in the model dashboard. We can chain together multiple calls to one or multiple models, which is the whole point of Langchain! from langchain.chains import LLMChain from langchain.memory import ConversationBufferWindowMemory from langchain_core.prompts import PromptTemplate template = """Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. {history} Human: {human_input} Assistant:""" prompt = PromptTemplate(input_variables=["history", "human_input"], template=template) chatgpt_chain = LLMChain( llm=mistral, llm_kwargs={"max_length": 4096}, prompt=prompt, verbose=True, memory=ConversationBufferWindowMemory(k=2), ) output = chatgpt_chain.predict( human_input="I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd." ) print(output) As we can see from the final example, which outputs a number that may or may not be correct, the model is only approximating likely terminal output, not actually executing provided commands. Still, the example demonstrates Mistral’s ample context window, code generation capabilities, and ability to stay on-topic even in conversational sequences.
https://python.langchain.com/docs/integrations/llms/beam/
This wrapper calls the Beam API to deploy, and then make subsequent calls to, an instance of the gpt2 LLM in a cloud deployment. It requires the Beam library to be installed and a Beam Client ID and Client Secret to be registered. Calling the wrapper creates and runs an instance of the model, returning text related to the prompt. Additional calls can then be made by calling the Beam API directly. ``` import osbeam_client_id = "<Your beam client id>"beam_client_secret = "<Your beam client secret>"# Set the environment variablesos.environ["BEAM_CLIENT_ID"] = beam_client_idos.environ["BEAM_CLIENT_SECRET"] = beam_client_secret# Run the beam configure command!beam configure --clientId={beam_client_id} --clientSecret={beam_client_secret} ``` Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster! ``` from langchain_community.llms.beam import Beamllm = Beam( model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers", ], max_length="50", verbose=False,)llm._deploy()response = llm._call("Running machine learning on a remote GPU")print(response) ```
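Once `_deploy()` has run, the deployed model can also be composed with a prompt template. This is a sketch under the assumption that the `Beam` instance behaves like any other LangChain LLM runnable; it is not part of the original page.

```python
# A minimal sketch: reuse the deployed Beam `llm` inside a prompt | llm chain.
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Question: {question}\n\nAnswer: Let's think step by step."
)

chain = prompt | llm  # `llm` is the Beam instance deployed above
print(chain.invoke({"question": "Why can a cold start take a few minutes?"}))
```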
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:08.561Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/beam/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/beam/", "description": "Calls the Beam API wrapper to deploy and make subsequent calls to an", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4425", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"beam\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:08 GMT", "etag": "W/\"fbcf8ae4eb58483d5b3f52ae568f5ab3\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::vrnmv-1713753608010-af7087482a21" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/beam/", "property": "og:url" }, { "content": "Beam | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Calls the Beam API wrapper to deploy and make subsequent calls to an", "property": "og:description" } ], "title": "Beam | 🦜️🔗 LangChain" }
Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API. import os beam_client_id = "<Your beam client id>" beam_client_secret = "<Your beam client secret>" # Set the environment variables os.environ["BEAM_CLIENT_ID"] = beam_client_id os.environ["BEAM_CLIENT_SECRET"] = beam_client_secret # Run the beam configure command !beam configure --clientId={beam_client_id} --clientSecret={beam_client_secret} Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster! from langchain_community.llms.beam import Beam llm = Beam( model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers", ], max_length="50", verbose=False, ) llm._deploy() response = llm._call("Running machine learning on a remote GPU") print(response)
https://python.langchain.com/docs/integrations/llms/bedrock/
## Bedrock > [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with. ``` %pip install --upgrade --quiet boto3 ``` ``` from langchain_community.llms import Bedrockllm = Bedrock( credentials_profile_name="bedrock-admin", model_id="amazon.titan-text-express-v1") ``` ### Using in a conversation chain[​](#using-in-a-conversation-chain "Direct link to Using in a conversation chain") ``` from langchain.chains import ConversationChainfrom langchain.memory import ConversationBufferMemoryconversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="Hi there!") ``` ### Conversation Chain With Streaming[​](#conversation-chain-with-streaming "Direct link to Conversation Chain With Streaming") ``` from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain_community.llms import Bedrockllm = Bedrock( credentials_profile_name="bedrock-admin", model_id="amazon.titan-text-express-v1", streaming=True, callbacks=[StreamingStdOutCallbackHandler()],) ``` ``` conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="Hi there!") ``` ### Custom models[​](#custom-models "Direct link to Custom models") ``` custom_llm = Bedrock( credentials_profile_name="bedrock-admin", provider="cohere", model_id="<Custom model ARN>", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model model_kwargs={"temperature": 1}, streaming=True, callbacks=[StreamingStdOutCallbackHandler()],)conversation = ConversationChain( llm=custom_llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="What is the recipe of mayonnaise?") ``` ### Guardrails for Amazon Bedrock example[​](#guardrails-for-amazon-bedrock-example "Direct link to Guardrails for Amazon Bedrock example") ## Guardrails for Amazon Bedrock (Preview)[​](#guardrails-for-amazon-bedrock-preview "Direct link to Guardrails for Amazon Bedrock (Preview)") [Guardrails for Amazon Bedrock](https://aws.amazon.com/bedrock/guardrails/) evaluates user inputs and model responses based on use case specific policies, and provides an additional layer of safeguards regardless of the underlying model. Guardrails can be applied across models, including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as fine-tuned models. **Note**: Guardrails for Amazon Bedrock is currently in preview and not generally available. Reach out through your usual AWS Support contacts if you’d like access to this feature. 
In this section, we are going to set up a Bedrock language model with specific guardrails that include tracing capabilities. ``` from typing import Anyfrom langchain_core.callbacks import AsyncCallbackHandlerclass BedrockAsyncCallbackHandler(AsyncCallbackHandler): # Async callback handler that can be used to handle callbacks from langchain. async def on_llm_error(self, error: BaseException, **kwargs: Any) -> Any: reason = kwargs.get("reason") if reason == "GUARDRAIL_INTERVENED": print(f"Guardrails: {kwargs}")# Guardrails for Amazon Bedrock with tracellm = Bedrock( credentials_profile_name="bedrock-admin", model_id="<Model_ID>", model_kwargs={}, guardrails={"id": "<Guardrail_ID>", "version": "<Version>", "trace": True}, callbacks=[BedrockAsyncCallbackHandler()],) ```
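To see the guardrail callback in action, a minimal sketch (our assumption, not the page's code) is to invoke the model asynchronously so the `AsyncCallbackHandler` defined above can fire if the guardrail intervenes:

```python
# A sketch only: exercise the guardrail-enabled Bedrock `llm` defined above.
import asyncio


async def main() -> None:
    answer = await llm.ainvoke("Tell me about your favorite hiking trail.")
    print(answer)


asyncio.run(main())
```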
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:08.778Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/bedrock/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/bedrock/", "description": "Amazon Bedrock is a fully managed", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "5936", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"bedrock\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:08 GMT", "etag": "W/\"722add5f6cdfc410a6d1b33b7c54f704\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::zmgp6-1713753608322-50961e9e9d77" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/bedrock/", "property": "og:url" }, { "content": "Bedrock | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Amazon Bedrock is a fully managed", "property": "og:description" } ], "title": "Bedrock | 🦜️🔗 LangChain" }
Bedrock Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with. %pip install --upgrade --quiet boto3 from langchain_community.llms import Bedrock llm = Bedrock( credentials_profile_name="bedrock-admin", model_id="amazon.titan-text-express-v1" ) Using in a conversation chain​ from langchain.chains import ConversationChain from langchain.memory import ConversationBufferMemory conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="Hi there!") Conversation Chain With Streaming​ from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain_community.llms import Bedrock llm = Bedrock( credentials_profile_name="bedrock-admin", model_id="amazon.titan-text-express-v1", streaming=True, callbacks=[StreamingStdOutCallbackHandler()], ) conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="Hi there!") Custom models​ custom_llm = Bedrock( credentials_profile_name="bedrock-admin", provider="cohere", model_id="<Custom model ARN>", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model model_kwargs={"temperature": 1}, streaming=True, callbacks=[StreamingStdOutCallbackHandler()], ) conversation = ConversationChain( llm=custom_llm, verbose=True, memory=ConversationBufferMemory() ) conversation.predict(input="What is the recipe of mayonnaise?") Guardrails for Amazon Bedrock example​ Guardrails for Amazon Bedrock (Preview)​ Guardrails for Amazon Bedrock evaluates user inputs and model responses based on use case specific policies, and provides an additional layer of safeguards regardless of the underlying model. Guardrails can be applied across models, including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as fine-tuned models. Note: Guardrails for Amazon Bedrock is currently in preview and not generally available. Reach out through your usual AWS Support contacts if you’d like access to this feature. In this section, we are going to set up a Bedrock language model with specific guardrails that include tracing capabilities. from typing import Any from langchain_core.callbacks import AsyncCallbackHandler class BedrockAsyncCallbackHandler(AsyncCallbackHandler): # Async callback handler that can be used to handle callbacks from langchain. 
async def on_llm_error(self, error: BaseException, **kwargs: Any) -> Any: reason = kwargs.get("reason") if reason == "GUARDRAIL_INTERVENED": print(f"Guardrails: {kwargs}") # Guardrails for Amazon Bedrock with trace llm = Bedrock( credentials_profile_name="bedrock-admin", model_id="<Model_ID>", model_kwargs={}, guardrails={"id": "<Guardrail_ID>", "version": "<Version>", "trace": True}, callbacks=[BedrockAsyncCallbackHandler()], )
https://python.langchain.com/docs/integrations/llms/bittensor/
## Bittensor > [Bittensor](https://bittensor.com/) is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge. > > `NIBittensorLLM` is developed by [Neural Internet](https://neuralinternet.ai/), powered by `Bittensor`. > This LLM showcases true potential of decentralized AI by giving you the best response(s) from the `Bittensor protocol`, which consist of various AI models such as `OpenAI`, `LLaMA2` etc. Users can view their logs, requests, and API keys on the [Validator Endpoint Frontend](https://api.neuralinternet.ai/). However, changes to the configuration are currently prohibited; otherwise, the user’s queries will be blocked. If you encounter any difficulties or have any questions, please feel free to reach out to our developer on [GitHub](https://github.com/Kunj-2206), [Discord](https://discordapp.com/users/683542109248159777) or join our discord server for latest update and queries [Neural Internet](https://discord.gg/neuralinternet). ## Different Parameter and response handling for NIBittensorLLM[​](#different-parameter-and-response-handling-for-nibittensorllm "Direct link to Different Parameter and response handling for NIBittensorLLM") ``` import jsonfrom pprint import pprintfrom langchain.globals import set_debugfrom langchain_community.llms import NIBittensorLLMset_debug(True)# System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with modelllm_sys = NIBittensorLLM( system_prompt="Your task is to determine response based on user prompt.Explain me like I am technical lead of a project")sys_resp = llm_sys( "What is bittensor and What are the potential benefits of decentralized AI?")print(f"Response provided by LLM with system prompt set is : {sys_resp}")# The top_responses parameter can give multiple responses based on its parameter value# This below code retrive top 10 miner's response all the response are in format of json# Json response structure is""" { "choices": [ {"index": Bittensor's Metagraph index number, "uid": Unique Identifier of a miner, "responder_hotkey": Hotkey of a miner, "message":{"role":"assistant","content": Contains actual response}, "response_ms": Time in millisecond required to fetch response from a miner} ] } """multi_response_llm = NIBittensorLLM(top_responses=10)multi_resp = multi_response_llm("What is Neural Network Feeding Mechanism?")json_multi_resp = json.loads(multi_resp)pprint(json_multi_resp) ``` ## Using NIBittensorLLM with LLMChain and PromptTemplate[​](#using-nibittensorllm-with-llmchain-and-prompttemplate "Direct link to Using NIBittensorLLM with LLMChain and PromptTemplate") ``` from langchain.chains import LLMChainfrom langchain.globals import set_debugfrom langchain_community.llms import NIBittensorLLMfrom langchain_core.prompts import PromptTemplateset_debug(True)template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)# System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with modelllm = NIBittensorLLM( system_prompt="Your task is to determine response based on user prompt.")llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What is bittensor?"llm_chain.run(question) ``` ``` from langchain.tools import Toolfrom langchain_community.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()tool = Tool( name="Google Search", description="Search Google for recent results.", func=search.run,) ``` ``` from 
langchain import hubfrom langchain.agents import ( AgentExecutor, create_react_agent,)from langchain.memory import ConversationBufferMemoryfrom langchain_community.llms import NIBittensorLLMtools = [tool]prompt = hub.pull("hwchase17/react")llm = NIBittensorLLM( system_prompt="Your task is to determine a response based on user prompt")memory = ConversationBufferMemory(memory_key="chat_history")agent = create_react_agent(llm, tools, prompt)agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)# Pass the user question as the agent input (the react prompt pulled above is already wired into the agent)response = agent_executor.invoke({"input": "What is bittensor?"}) ```
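Because the executor above is configured with `ConversationBufferMemory`, a follow-up turn can reuse the earlier exchange as chat history. The question below is purely illustrative (a sketch, not part of the original page):

```python
# A minimal sketch: a second turn through the same agent executor.
followup = agent_executor.invoke({"input": "How do miners earn rewards on it?"})
print(followup["output"])
```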
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:09.013Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/bittensor/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/bittensor/", "description": "Bittensor is a mining network, similar to", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"bittensor\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:08 GMT", "etag": "W/\"de1bfa923ba16524c2cf53deed9c016f\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::zmsw5-1713753608321-9f7edeb34005" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/bittensor/", "property": "og:url" }, { "content": "Bittensor | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Bittensor is a mining network, similar to", "property": "og:description" } ], "title": "Bittensor | 🦜️🔗 LangChain" }
Bittensor Bittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge. NIBittensorLLM is developed by Neural Internet, powered by Bittensor. This LLM showcases true potential of decentralized AI by giving you the best response(s) from the Bittensor protocol, which consist of various AI models such as OpenAI, LLaMA2 etc. Users can view their logs, requests, and API keys on the Validator Endpoint Frontend. However, changes to the configuration are currently prohibited; otherwise, the user’s queries will be blocked. If you encounter any difficulties or have any questions, please feel free to reach out to our developer on GitHub, Discord or join our discord server for latest update and queries Neural Internet. Different Parameter and response handling for NIBittensorLLM​ import json from pprint import pprint from langchain.globals import set_debug from langchain_community.llms import NIBittensorLLM set_debug(True) # System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with model llm_sys = NIBittensorLLM( system_prompt="Your task is to determine response based on user prompt.Explain me like I am technical lead of a project" ) sys_resp = llm_sys( "What is bittensor and What are the potential benefits of decentralized AI?" ) print(f"Response provided by LLM with system prompt set is : {sys_resp}") # The top_responses parameter can give multiple responses based on its parameter value # This below code retrive top 10 miner's response all the response are in format of json # Json response structure is """ { "choices": [ {"index": Bittensor's Metagraph index number, "uid": Unique Identifier of a miner, "responder_hotkey": Hotkey of a miner, "message":{"role":"assistant","content": Contains actual response}, "response_ms": Time in millisecond required to fetch response from a miner} ] } """ multi_response_llm = NIBittensorLLM(top_responses=10) multi_resp = multi_response_llm("What is Neural Network Feeding Mechanism?") json_multi_resp = json.loads(multi_resp) pprint(json_multi_resp) Using NIBittensorLLM with LLMChain and PromptTemplate​ from langchain.chains import LLMChain from langchain.globals import set_debug from langchain_community.llms import NIBittensorLLM from langchain_core.prompts import PromptTemplate set_debug(True) template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) # System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with model llm = NIBittensorLLM( system_prompt="Your task is to determine response based on user prompt." ) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What is bittensor?" 
llm_chain.run(question) from langchain.tools import Tool from langchain_community.utilities import GoogleSearchAPIWrapper search = GoogleSearchAPIWrapper() tool = Tool( name="Google Search", description="Search Google for recent results.", func=search.run, ) from langchain import hub from langchain.agents import ( AgentExecutor, create_react_agent, ) from langchain.memory import ConversationBufferMemory from langchain_community.llms import NIBittensorLLM tools = [tool] prompt = hub.pull("hwchase17/react") llm = NIBittensorLLM( system_prompt="Your task is to determine a response based on user prompt" ) memory = ConversationBufferMemory(memory_key="chat_history") agent = create_react_agent(llm, tools, prompt) agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory) response = agent_executor.invoke({"input": prompt})
https://python.langchain.com/docs/integrations/llms/chatglm/
## ChatGLM [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the new features like better performance, longer context and more efficient inference. [ChatGLM3](https://github.com/THUDM/ChatGLM3) is a new generation of pre-trained dialogue models jointly released by Zhipu AI and Tsinghua KEG. ChatGLM3-6B is the open-source model in the ChatGLM3 series ``` # Install required dependencies%pip install -qU langchain langchain-community ``` ## ChatGLM3[​](#chatglm3 "Direct link to ChatGLM3") This examples goes over how to use LangChain to interact with ChatGLM3-6B Inference for text completion. ``` from langchain.chains import LLMChainfrom langchain.schema.messages import AIMessagefrom langchain_community.llms.chatglm3 import ChatGLM3from langchain_core.prompts import PromptTemplate ``` ``` template = """{question}"""prompt = PromptTemplate.from_template(template) ``` ``` endpoint_url = "http://127.0.0.1:8000/v1/chat/completions"messages = [ AIMessage(content="我将从美国到中国来旅游,出行前希望了解中国的城市"), AIMessage(content="欢迎问我任何问题。"),]llm = ChatGLM3( endpoint_url=endpoint_url, max_tokens=80000, prefix_messages=messages, top_p=0.9,) ``` ``` llm_chain = LLMChain(prompt=prompt, llm=llm)question = "北京和上海两座城市有什么不同?"llm_chain.run(question) ``` ``` '北京和上海是中国两个不同的城市,它们在很多方面都有所不同。\n\n北京是中国的首都,也是历史悠久的城市之一。它有着丰富的历史文化遗产,如故宫、颐和园等,这些景点吸引着众多游客前来观光。北京也是一个政治、文化和教育中心,有很多政府机构和学术机构总部设在北京。\n\n上海则是一个现代化的城市,它是中国的经济中心之一。上海拥有许多高楼大厦和国际化的金融机构,是中国最国际化的城市之一。上海也是一个美食和购物天堂,有许多著名的餐厅和购物中心。\n\n北京和上海的气候也不同。北京属于温带大陆性气候,冬季寒冷干燥,夏季炎热多风;而上海属于亚热带季风气候,四季分明,春秋宜人。\n\n北京和上海有很多不同之处,但都是中国非常重要的城市,每个城市都有自己独特的魅力和特色。' ``` ## ChatGLM and ChatGLM2[​](#chatglm-and-chatglm2 "Direct link to ChatGLM and ChatGLM2") The following example shows how to use LangChain to interact with the ChatGLM2-6B Inference to complete text. ChatGLM-6B and ChatGLM2-6B has the same api specs, so this example should work with both. ``` from langchain.chains import LLMChainfrom langchain_community.llms import ChatGLMfrom langchain_core.prompts import PromptTemplate# import os ``` ``` template = """{question}"""prompt = PromptTemplate.from_template(template) ``` ``` # default endpoint_url for a local deployed ChatGLM api serverendpoint_url = "http://127.0.0.1:8000"# direct access endpoint in a proxied environment# os.environ['NO_PROXY'] = '127.0.0.1'llm = ChatGLM( endpoint_url=endpoint_url, max_token=80000, history=[ ["我将从美国到中国来旅游,出行前希望了解中国的城市", "欢迎问我任何问题。"] ], top_p=0.9, model_kwargs={"sample_model_args": False},)# turn on with_history only when you want the LLM object to keep track of the conversation history# and send the accumulated context to the backend model api, which make it stateful. 
By default it is stateless.# llm.with_history = True ``` ``` llm_chain = LLMChain(prompt=prompt, llm=llm) ``` ``` question = "北京和上海两座城市有什么不同?"llm_chain.run(question) ``` ``` ChatGLM payload: {'prompt': '北京和上海两座城市有什么不同?', 'temperature': 0.1, 'history': [['我将从美国到中国来旅游,出行前希望了解中国的城市', '欢迎问我任何问题。']], 'max_length': 80000, 'top_p': 0.9, 'sample_model_args': False} ``` ``` '北京和上海是中国的两个首都,它们在许多方面都有所不同。\n\n北京是中国的政治和文化中心,拥有悠久的历史和灿烂的文化。它是中国最重要的古都之一,也是中国历史上最后一个封建王朝的都城。北京有许多著名的古迹和景点,例如紫禁城、天安门广场和长城等。\n\n上海是中国最现代化的城市之一,也是中国商业和金融中心。上海拥有许多国际知名的企业和金融机构,同时也有许多著名的景点和美食。上海的外滩是一个历史悠久的商业区,拥有许多欧式建筑和餐馆。\n\n除此之外,北京和上海在交通和人口方面也有很大差异。北京是中国的首都,人口众多,交通拥堵问题较为严重。而上海是中国的商业和金融中心,人口密度较低,交通相对较为便利。\n\n总的来说,北京和上海是两个拥有独特魅力和特点的城市,可以根据自己的兴趣和时间来选择前往其中一座城市旅游。' ```
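Following the commented hint in the setup above, a minimal sketch (our assumption) of a stateful, multi-turn exchange looks like this:

```python
# A sketch only: let the ChatGLM wrapper accumulate and resend history,
# as suggested by the commented `llm.with_history` hint above.
llm.with_history = True

print(llm_chain.run("我想去北京旅游,有哪些必去的景点?"))
# The second call now carries the first exchange as history.
print(llm_chain.run("那上海呢?"))
```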
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:09.340Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/chatglm/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/chatglm/", "description": "ChatGLM-6B is an open bilingual", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "8778", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"chatglm\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:08 GMT", "etag": "W/\"89a213402409ec75e27f3f3b39fdd662\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::57h9m-1713753608543-f89dbe532856" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/chatglm/", "property": "og:url" }, { "content": "ChatGLM | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "ChatGLM-6B is an open bilingual", "property": "og:description" } ], "title": "ChatGLM | 🦜️🔗 LangChain" }
## ChatGLM

> `ChatGLM-6B` is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).

> `ChatGLM2-6B` is the second-generation version of the open-source bilingual (Chinese-English) chat model `ChatGLM-6B`. It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing new features such as better performance, longer context, and more efficient inference.

> `ChatGLM3` is a new generation of pre-trained dialogue models jointly released by Zhipu AI and Tsinghua KEG. `ChatGLM3-6B` is the open-source model in the `ChatGLM3` series.

```
# Install required dependencies
%pip install -qU langchain langchain-community
```

## ChatGLM3

This example goes over how to use LangChain to interact with ChatGLM3-6B inference for text completion.

```
from langchain.chains import LLMChain
from langchain.schema.messages import AIMessage
from langchain_community.llms.chatglm3 import ChatGLM3
from langchain_core.prompts import PromptTemplate
```

```
template = """{question}"""
prompt = PromptTemplate.from_template(template)

endpoint_url = "http://127.0.0.1:8000/v1/chat/completions"

messages = [
    AIMessage(content="我将从美国到中国来旅游,出行前希望了解中国的城市"),
    AIMessage(content="欢迎问我任何问题。"),
]

llm = ChatGLM3(
    endpoint_url=endpoint_url,
    max_tokens=80000,
    prefix_messages=messages,
    top_p=0.9,
)
```

```
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "北京和上海两座城市有什么不同?"

llm_chain.run(question)
```

```
'北京和上海是中国两个不同的城市,它们在很多方面都有所不同。\n\n北京是中国的首都,也是历史悠久的城市之一。它有着丰富的历史文化遗产,如故宫、颐和园等,这些景点吸引着众多游客前来观光。北京也是一个政治、文化和教育中心,有很多政府机构和学术机构总部设在北京。\n\n上海则是一个现代化的城市,它是中国的经济中心之一。上海拥有许多高楼大厦和国际化的金融机构,是中国最国际化的城市之一。上海也是一个美食和购物天堂,有许多著名的餐厅和购物中心。\n\n北京和上海的气候也不同。北京属于温带大陆性气候,冬季寒冷干燥,夏季炎热多风;而上海属于亚热带季风气候,四季分明,春秋宜人。\n\n北京和上海有很多不同之处,但都是中国非常重要的城市,每个城市都有自己独特的魅力和特色。'
```

## ChatGLM and ChatGLM2

The following example shows how to use LangChain to interact with the ChatGLM2-6B inference server to complete text. ChatGLM-6B and ChatGLM2-6B have the same API specs, so this example should work with both.

```
from langchain.chains import LLMChain
from langchain_community.llms import ChatGLM
from langchain_core.prompts import PromptTemplate

# import os
```

```
template = """{question}"""
prompt = PromptTemplate.from_template(template)

# default endpoint_url for a local deployed ChatGLM api server
endpoint_url = "http://127.0.0.1:8000"

# direct access endpoint in a proxied environment
# os.environ['NO_PROXY'] = '127.0.0.1'

llm = ChatGLM(
    endpoint_url=endpoint_url,
    max_token=80000,
    history=[
        ["我将从美国到中国来旅游,出行前希望了解中国的城市", "欢迎问我任何问题。"]
    ],
    top_p=0.9,
    model_kwargs={"sample_model_args": False},
)

# turn on with_history only when you want the LLM object to keep track of the conversation history
# and send the accumulated context to the backend model api, which makes it stateful. By default it is stateless.
# llm.with_history = True
```

```
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "北京和上海两座城市有什么不同?"

llm_chain.run(question)
```

```
ChatGLM payload: {'prompt': '北京和上海两座城市有什么不同?', 'temperature': 0.1, 'history': [['我将从美国到中国来旅游,出行前希望了解中国的城市', '欢迎问我任何问题。']], 'max_length': 80000, 'top_p': 0.9, 'sample_model_args': False}
```

```
'北京和上海是中国的两个首都,它们在许多方面都有所不同。\n\n北京是中国的政治和文化中心,拥有悠久的历史和灿烂的文化。它是中国最重要的古都之一,也是中国历史上最后一个封建王朝的都城。北京有许多著名的古迹和景点,例如紫禁城、天安门广场和长城等。\n\n上海是中国最现代化的城市之一,也是中国商业和金融中心。上海拥有许多国际知名的企业和金融机构,同时也有许多著名的景点和美食。上海的外滩是一个历史悠久的商业区,拥有许多欧式建筑和餐馆。\n\n除此之外,北京和上海在交通和人口方面也有很大差异。北京是中国的首都,人口众多,交通拥堵问题较为严重。而上海是中国的商业和金融中心,人口密度较低,交通相对较为便利。\n\n总的来说,北京和上海是两个拥有独特魅力和特点的城市,可以根据自己的兴趣和时间来选择前往其中一座城市旅游。'
```
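`LLMChain` is kept above for parity with the original notebook. The same call can also be expressed with the LCEL composition operator; a minimal sketch, assuming a ChatGLM3 server is already running at the endpoint configured above:

```
from langchain_community.llms.chatglm3 import ChatGLM3
from langchain_core.prompts import PromptTemplate

# Reuse the same prompt and endpoint as above (assumes the server is reachable).
prompt = PromptTemplate.from_template("{question}")
llm = ChatGLM3(endpoint_url="http://127.0.0.1:8000/v1/chat/completions")

# Compose the prompt and the model with LCEL instead of the legacy LLMChain.
chain = prompt | llm

# invoke() takes a dict keyed by the prompt's input variable.
print(chain.invoke({"question": "北京和上海两座城市有什么不同?"}))
```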
https://python.langchain.com/docs/integrations/llms/cerebriumai/
## CerebriumAI

`Cerebrium` is an AWS Sagemaker alternative. It also provides API access to [several LLM models](https://docs.cerebrium.ai/cerebrium/prebuilt-models/deployment).

This notebook goes over how to use LangChain with [CerebriumAI](https://docs.cerebrium.ai/introduction).

## Install cerebrium

The `cerebrium` package is required to use the `CerebriumAI` API. Install `cerebrium` using `pip3 install cerebrium`.

```
# Install the package
!pip3 install cerebrium
```

## Imports

```
import os

from langchain.chains import LLMChain
from langchain_community.llms import CerebriumAI
from langchain_core.prompts import PromptTemplate
```

## Set the Environment API Key

Make sure to get your API key from CerebriumAI. See [here](https://dashboard.cerebrium.ai/login). You are given 1 hour of serverless GPU compute for free to test different models.

```
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
```

## Create the CerebriumAI instance

You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.

```
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
```

## Create a Prompt Template

We will create a prompt template for Question and Answer.

```
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
```

## Initiate the LLMChain

```
llm_chain = LLMChain(prompt=prompt, llm=llm)
```

## Run the LLMChain

Provide a question and run the LLMChain.

```
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

llm_chain.run(question)
```
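The notebook stops at a plain completion call. As a small extension (not from the original page), extra generation parameters can usually be forwarded to the deployed model through `model_kwargs`; the keyword and the parameter names below are assumptions that depend on your endpoint, so treat this as a sketch:

```
# Sketch only: forward extra generation parameters to the endpoint.
# `model_kwargs` and the keys below are assumptions; check your deployed model.
llm = CerebriumAI(
    endpoint_url="YOUR ENDPOINT URL HERE",
    model_kwargs={"max_length": 100, "temperature": 0.7},
)
print(llm.invoke("Tell me a fun fact about space."))
```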
https://python.langchain.com/docs/integrations/llms/clarifai/
## Clarifai

> [Clarifai](https://www.clarifai.com/) is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.

This example goes over how to use LangChain to interact with `Clarifai` [models](https://clarifai.com/explore/models). To use Clarifai, you must have an account and a Personal Access Token (PAT) key. [Check here](https://clarifai.com/settings/security) to get or create a PAT.

## Dependencies

```
# Install required dependencies
%pip install --upgrade --quiet clarifai
```

```
# Declare clarifai pat token as environment variable or you can pass it as argument in clarifai class.
import os

os.environ["CLARIFAI_PAT"] = "CLARIFAI_PAT_TOKEN"
```

## Imports

Here we will be setting the personal access token. You can find your PAT under [settings/security](https://clarifai.com/settings/security) in your Clarifai account.

```
# Please login and get your API key from https://clarifai.com/settings/security
from getpass import getpass

CLARIFAI_PAT = getpass()
```

```
# Import the required modules
from langchain.chains import LLMChain
from langchain_community.llms import Clarifai
from langchain_core.prompts import PromptTemplate
```

## Input

Create a prompt template to be used with the LLM Chain:

```
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
```

## Setup

Set up the user id and app id where the model resides. You can find a list of public models on [https://clarifai.com/explore/models](https://clarifai.com/explore/models).

You will also have to initialize the model id and, if needed, the model version id. Some models have many versions; you can choose the one appropriate for your task.

Alternatively, you can use the model_url (for example: [https://clarifai.com/anthropic/completion/models/claude-v2](https://clarifai.com/anthropic/completion/models/claude-v2)) for initialization.

```
USER_ID = "openai"
APP_ID = "chat-completion"
MODEL_ID = "GPT-3_5-turbo"

# You can provide a specific model version as the model_version_id arg.
# MODEL_VERSION_ID = "MODEL_VERSION_ID"
# or

MODEL_URL = "https://clarifai.com/openai/chat-completion/models/GPT-4"
```

```
# Initialize a Clarifai LLM
clarifai_llm = Clarifai(user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
# or
# Initialize through Model URL
clarifai_llm = Clarifai(model_url=MODEL_URL)
```

```
# Create LLM chain
llm_chain = LLMChain(prompt=prompt, llm=clarifai_llm)
```

## Run Chain

```
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

llm_chain.run(question)
```

```
' Okay, here are the steps to figure this out:\n\n1. Justin Bieber was born on March 1, 1994.\n\n2. The Super Bowl that took place in the year of his birth was Super Bowl XXVIII. \n\n3. Super Bowl XXVIII was played on January 30, 1994.\n\n4. The two teams that played in Super Bowl XXVIII were the Dallas Cowboys and the Buffalo Bills. \n\n5. The Dallas Cowboys defeated the Buffalo Bills 30-13 to win Super Bowl XXVIII.\n\nTherefore, the NFL team that won the Super Bowl in the year Justin Bieber was born was the Dallas Cowboys.'
```

## Model Predict with Inference parameters for GPT

Alternatively, you can use GPT models with inference parameters (like temperature, max_tokens, etc.)

```
# Initialize the parameters as dict.
params = dict(temperature=str(0.3), max_tokens=100)
```

```
clarifai_llm = Clarifai(user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
llm_chain = LLMChain(
    prompt=prompt, llm=clarifai_llm, llm_kwargs={"inference_params": params}
)
```

```
question = "How many 3 digit even numbers you can form that if one of the digits is 5 then the following digit must be 7?"

llm_chain.run(question)
```

```
'Step 1: The first digit can be any even number from 1 to 9, except for 5. So there are 4 choices for the first digit.\n\nStep 2: If the first digit is not 5, then the second digit must be 7. So there is only 1 choice for the second digit.\n\nStep 3: The third digit can be any even number from 0 to 9, except for 5 and 7. So there are '
```

Generate responses for a list of prompts:

```
# We can use _generate to generate the response for list of prompts.
clarifai_llm._generate(
    [
        "Help me summarize the events of american revolution in 5 sentences",
        "Explain about rocket science in a funny way",
        "Create a script for welcome speech for the college sports day",
    ],
    inference_params=params,
)
```

```
LLMResult(generations=[[Generation(text=' Here is a 5 sentence summary of the key events of the American Revolution:\n\nThe American Revolution began with growing tensions between American colonists and the British government over issues of taxation without representation. In 1775, fighting broke out between British troops and American militiamen in Lexington and Concord, starting the Revolutionary War. The Continental Congress appointed George Washington as commander of the Continental Army, which went on to win key victories over the British. In 1776, the Declaration of Independence was adopted, formally declaring the 13 American colonies free from British rule. After years of fighting, the Revolutionary War ended with the British defeat at Yorktown in 1781 and recognition of American independence.')], [Generation(text=" Here's a humorous take on explaining rocket science:\n\nRocket science is so easy, it's practically child's play! Just strap a big metal tube full of explosive liquid to your butt and light the fuse. What could go wrong? Blastoff! Whoosh, you'll be zooming to the moon in no time. Just remember your helmet or your head might go pop like a zit when you leave the atmosphere. \n\nMaking rockets is a cinch too. Simply mix together some spicy spices, garlic powder, chili powder, a dash of gunpowder and voila - rocket fuel! Add a pinch of baking soda and vinegar if you want an extra kick. Shake well and pour into your DIY soda bottle rocket. Stand back and watch that baby soar!\n\nGuiding a rocket is fun for the whole family. Just strap in, push some random buttons and see where you end up. It's like the ultimate surprise vacation! You never know if you'll wind up on Venus, crash land on Mars, or take a quick dip through the rings of Saturn. \n\nAnd if anything goes wrong, don't sweat it. Rocket science is easy breezy. Just troubleshoot on the fly with some duct tape and crazy glue and you'll be back on course in a jiffy. Who needs mission control when you've got this!")], [Generation(text=" Here is a draft welcome speech for a college sports day:\n\nGood morning everyone and welcome to our college's annual sports day! It's wonderful to see so many students, faculty, staff, alumni, and guests gathered here today to celebrate sportsmanship and athletic achievement at our college. \n\nLet's begin by thanking all the organizers, volunteers, coaches, and staff members who worked tirelessly behind the scenes to make this event possible. Our sports day would not happen without your dedication and commitment. \n\nI also want to recognize all the student-athletes with us today. You inspire us with your talent, spirit, and determination. Sports have a unique power to unite and energize our community. Through both individual and team sports, you demonstrate focus, collaboration, perseverance and resilience – qualities that will serve you well both on and off the field.\n\nThe spirit of competition and fair play are core values of any sports event. I encourage all of you to compete enthusiastically today. Play to the best of your ability and have fun. Applaud the effort and sportsmanship of your fellow athletes, regardless of the outcome. \n\nWin or lose, this sports day is a day for us to build camaraderie and create lifelong memories. Let's make it a day of fitness and friendship for all. With that, let the games begin. Enjoy the day!")]], llm_output=None, run=None)
```
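As the comment in the Dependencies section notes, the PAT can also be passed directly to the class instead of being read from the `CLARIFAI_PAT` environment variable. A minimal sketch; the `pat` keyword name is an assumption, so verify it against the wrapper's signature:

```
# Sketch: pass the Personal Access Token directly rather than via CLARIFAI_PAT.
# The `pat` keyword is an assumption; CLARIFAI_PAT was collected with getpass() above.
clarifai_llm = Clarifai(
    model_url="https://clarifai.com/openai/chat-completion/models/GPT-4",
    pat=CLARIFAI_PAT,
)
print(clarifai_llm.invoke("Write one sentence about the ocean."))
```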
https://python.langchain.com/docs/integrations/llms/cohere/
## Cohere

> [Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.

Head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) for detailed documentation of all attributes and methods.

## Setup

The integration lives in the `langchain-community` package. We also need to install the `cohere` package itself. We can install these with:

```
pip install -U langchain-community langchain-cohere
```

We’ll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:

```
import getpass
import os

os.environ["COHERE_API_KEY"] = getpass.getpass()
```

It’s also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability:

```
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```

## Usage

Cohere supports all [LLM](https://python.langchain.com/docs/modules/model_io/llms/) functionality:

```
from langchain_cohere import Cohere
from langchain_core.messages import HumanMessage
```

```
model = Cohere(model="command", max_tokens=256, temperature=0.75)
```

```
message = "Knock knock"
model.invoke(message)
```

```
await model.ainvoke(message)
```

```
for chunk in model.stream(message):
    print(chunk, end="", flush=True)
```

You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](https://python.langchain.com/docs/expression_language/):

```
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | model
```

```
chain.invoke({"topic": "bears"})
```

```
' Why did the teddy bear cross the road?\nBecause he had bear crossings.\n\nWould you like to hear another joke? '
```
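Since the model is a standard Runnable, it also supports batching several prompts in one call; a small sketch using the `model` configured above:

```
# Batch several prompts in one call; results come back in the same order.
model.batch(["Tell me a fact about whales.", "Tell me a fact about owls."])
```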
https://python.langchain.com/docs/integrations/llms/cloudflare_workersai/
## Cloudflare Workers AI

The [Cloudflare AI documentation](https://developers.cloudflare.com/workers-ai/models/text-generation/) lists all the generative text models available.

Both a Cloudflare account ID and an API token are required. Find out how to obtain them in [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/).

```
from langchain.chains import LLMChain
from langchain_community.llms.cloudflare_workersai import CloudflareWorkersAI
from langchain_core.prompts import PromptTemplate

template = """Human: {question}

AI Assistant: """

prompt = PromptTemplate.from_template(template)
```

Get authentication before running the LLM.

```
import getpass

my_account_id = getpass.getpass("Enter your Cloudflare account ID:\n\n")
my_api_token = getpass.getpass("Enter your Cloudflare API token:\n\n")

llm = CloudflareWorkersAI(account_id=my_account_id, api_token=my_api_token)
```

```
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Why are roses red?"
llm_chain.run(question)
```

```
"AI Assistant: Ah, a fascinating question! The answer to why roses are red is a bit complex, but I'll do my best to explain it in a simple and polite manner.\nRoses are red due to the presence of a pigment called anthocyanin. Anthocyanin is a type of flavonoid, a class of plant compounds that are responsible for the red, purple, and blue colors found in many fruits and vegetables.\nNow, you might be wondering why roses specifically have this pigment. The answer lies in the evolutionary history of roses. You see, roses have been around for millions of years, and their red color has likely played a crucial role in attracting pollinators like bees and butterflies. These pollinators are drawn to the bright colors of roses, which helps the plants reproduce and spread their seeds.\nSo, to summarize, the reason roses are red is because of the anthocyanin pigment, which is a result of millions of years of evolutionary pressures shaping the plant's coloration to attract pollinators. I hope that helps clarify things for"
```

```
# Using streaming
for chunk in llm.stream("Why is sky blue?"):
    print(chunk, end=" | ", flush=True)
```

```
Ah | , | a | most | excellent | question | , | my | dear | human | ! | * | ad | just | s | glass | es | * | The | sky | appears | blue | due | to | a | phenomen | on | known | as | Ray | le | igh | scatter | ing | . | When | sun | light | enters | Earth | ' | s | atmosphere | , | it | enc | oun | ters | tiny | mole | cules | of | g | ases | such | as | nit | ro | gen | and | o | xygen | . | These | mole | cules | scatter | the | light | in | all | directions | , | but | they | scatter | shorter | ( | blue | ) | w | avel | ength | s | more | than | longer | ( | red | ) | w | avel | ength | s | . | This | is | known | as | Ray | le | igh | scatter | ing | . | | As | a | result | , | the | blue | light | is | dispers | ed | throughout | the | atmosphere | , | giving | the | sky | its | characteristic | blue | h | ue | . | The | blue | light | appears | more | prominent | during | sun | r | ise | and | sun | set | due | to | the | scatter | ing | of | light | by | the | Earth | ' | s | atmosphere | at | these | times | . | | I | hope | this | explanation | has | been | helpful | , | my | dear | human | ! | Is | there | anything | else | you | would | like | to | know | ? | * | sm | iles | * | * | |
```
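The wrapper defaults to one of the Workers AI text-generation models. A hedged sketch of selecting a specific model from the catalog linked above; the `model` keyword and the model id below are assumptions to verify against the current documentation:

```
# Sketch: choose a specific Workers AI text-generation model by its catalog id.
# The `model` keyword and the id below are assumptions; check the linked catalog.
llm = CloudflareWorkersAI(
    account_id=my_account_id,
    api_token=my_api_token,
    model="@cf/meta/llama-2-7b-chat-int8",
)
print(llm.invoke("Name three primary colors."))
```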
https://python.langchain.com/docs/integrations/llms/ctransformers/
## C Transformers

The [C Transformers](https://github.com/marella/ctransformers) library provides Python bindings for GGML models.

This example goes over how to use LangChain to interact with `C Transformers` [models](https://github.com/marella/ctransformers#supported-models).

**Install**

```
%pip install --upgrade --quiet ctransformers
```

**Load Model**

```
from langchain_community.llms import CTransformers

llm = CTransformers(model="marella/gpt-2-ggml")
```

**Generate Text**

```
print(llm("AI is going to"))
```

**Streaming**

```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(
    model="marella/gpt-2-ggml", callbacks=[StreamingStdOutCallbackHandler()]
)

response = llm("AI is going to")
```

**LLMChain**

```
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer:"""

prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)

response = llm_chain.run("What is AI?")
```
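Generation behaviour can be tuned at construction time. A small sketch, assuming the wrapper accepts a `config` dict that is passed through to the underlying `ctransformers` library (the key names follow that library's documentation and should be treated as assumptions):

```
# Sketch: tune generation via a config dict forwarded to ctransformers.
# Key names are taken from the ctransformers docs; treat them as assumptions.
config = {"max_new_tokens": 256, "temperature": 0.7, "repetition_penalty": 1.1}

llm = CTransformers(model="marella/gpt-2-ggml", config=config)
print(llm("AI is going to"))
```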
https://python.langchain.com/docs/integrations/llms/ctranslate2/
## CTranslate2

**CTranslate2** is a C++ and Python library for efficient inference with Transformer models.

The project implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.

A full list of features and supported models is included in the [project’s repository](https://opennmt.net/CTranslate2/guides/transformers.html). To start, please check out the official [quickstart guide](https://opennmt.net/CTranslate2/quickstart.html).

To use, you should have the `ctranslate2` python package installed.

```
%pip install --upgrade --quiet ctranslate2
```

To use a Hugging Face model with CTranslate2, it has to be first converted to CTranslate2 format using the `ct2-transformers-converter` command. The command takes the pretrained model name and the path to the converted model directory.

```
# conversion can take several minutes
!ct2-transformers-converter --model meta-llama/Llama-2-7b-hf --quantization bfloat16 --output_dir ./llama-2-7b-ct2 --force
```

```
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:01<00:00,  1.81it/s]
```

```
from langchain_community.llms import CTranslate2

llm = CTranslate2(
    # output_dir from above:
    model_path="./llama-2-7b-ct2",
    tokenizer_name="meta-llama/Llama-2-7b-hf",
    device="cuda",
    # device_index can be either a single int or a list of ints,
    # indicating the ids of GPUs to use for inference:
    device_index=[0, 1],
    compute_type="bfloat16",
)
```

## Single call

```
print(
    llm(
        "He presented me with plausible evidence for the existence of unicorns: ",
        max_length=256,
        sampling_topk=50,
        sampling_temperature=0.2,
        repetition_penalty=2,
        cache_static_prompt=False,
    )
)
```

```
He presented me with plausible evidence for the existence of unicorns: 1) they are mentioned in ancient texts; and, more importantly to him (and not so much as a matter that would convince most people), he had seen one.
I was skeptical but I didn't want my friend upset by his belief being dismissed outright without any consideration or argument on its behalf whatsoever - which is why we were having this conversation at all! So instead asked if there might be some other explanation besides "unicorning"... maybe it could have been an ostrich? Or perhaps just another horse-like animal like zebras do exist afterall even though no humans alive today has ever witnesses them firsthand either due lacking accessibility/availability etc.. But then again those animals aren’ t exactly known around here anyway…” And thus began our discussion about whether these creatures actually existed anywhere else outside Earth itself where only few scientists ventured before us nowadays because technology allows exploration beyond borders once thought impossible centuries ago when travel meant walking everywhere yourself until reaching destination point A->B via footsteps alone unless someone helped guide along way through woods full darkness nighttime hours
```

## Multiple calls

```
print(
    llm.generate(
        ["The list of top romantic songs:\n1.", "The list of top rap songs:\n1."],
        max_length=128,
    )
)
```

```
generations=[[Generation(text='The list of top romantic songs:\n1. “I Will Always Love You” by Whitney Houston\n2. “Can’t Help Falling in Love” by Elvis Presley\n3. “Unchained Melody” by The Righteous Brothers\n4. “I Will Always Love You” by Dolly Parton\n5. “I Will Always Love You” by Whitney Houston\n6. “I Will Always Love You” by Dolly Parton\n7. “I Will Always Love You” by The Beatles\n8. “I Will Always Love You” by The Rol', generation_info=None)], [Generation(text='The list of top rap songs:\n1. “God’s Plan” by Drake\n2. “Rockstar” by Post Malone\n3. “Bad and Boujee” by Migos\n4. “Humble” by Kendrick Lamar\n5. “Bodak Yellow” by Cardi B\n6. “I’m the One” by DJ Khaled\n7. “Motorsport” by Migos\n8. “No Limit” by G-Eazy\n9. “Bounce Back” by Big Sean\n10. “', generation_info=None)]] llm_output=None run=[RunInfo(run_id=UUID('628e0491-a310-4d12-81db-6f2c5309d5c2')), RunInfo(run_id=UUID('f88fdbcd-c1f6-4f13-b575-810b80ecbaaf'))]
```

## Integrate the model in an LLMChain

```
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

template = """{question}

Let's think step by step. """
prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who was the US president in the year the first Pokemon game was released?"

print(llm_chain.run(question))
```

```
Who was the US president in the year the first Pokemon game was released?

Let's think step by step. 1996 was the year the first Pokemon game was released.
\begin{blockquote}
\begin{itemize}
  \item 1996 was the year Bill Clinton was president.
  \item 1996 was the year the first Pokemon game was released.
  \item 1996 was the year the first Pokemon game was released.
\end{itemize}
\end{blockquote}
I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one.
```
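For reference, the converted directory can also be driven with the `ctranslate2` API directly, without the LangChain wrapper. A rough sketch adapted from the linked quickstart guide; the exact call signatures are assumptions to check against the official documentation:

```
# Rough sketch of using the converted model with ctranslate2 directly.
# Signatures follow the project's quickstart; verify them against the docs.
import ctranslate2
import transformers

generator = ctranslate2.Generator("./llama-2-7b-ct2", device="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "He presented me with plausible evidence for the existence of unicorns: "
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=128, sampling_topk=50)
print(tokenizer.decode(results[0].sequences_ids[0]))
```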
https://python.langchain.com/docs/integrations/llms/databricks/
## Databricks The [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform. This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain. It supports two endpoint types: * Serving endpoint, recommended for production and development, * Cluster driver proxy app, recommended for interactive development. ## Installation[​](#installation "Direct link to Installation") `mlflow >= 2.9` is required to run the code in this notebook. If it’s not installed, please install it using this command: Also, we need `dbutils` for this example. ## Wrapping a serving endpoint: External model[​](#wrapping-a-serving-endpoint-external-model "Direct link to Wrapping a serving endpoint: External model") Prerequisite: Register an OpenAI API key as a secret: ``` databricks secrets create-scope <scope>databricks secrets put-secret <scope> openai-api-key --string-value $OPENAI_API_KEY ``` The following code creates a new serving endpoint with OpenAI’s GPT-4 model for chat and generates a response using the endpoint. ``` from langchain_community.chat_models import ChatDatabricksfrom langchain_core.messages import HumanMessagefrom mlflow.deployments import get_deploy_clientclient = get_deploy_client("databricks")secret = "secrets/<scope>/openai-api-key" # replace `<scope>` with your scopename = "my-chat" # rename this if my-chat already existsclient.create_endpoint( name=name, config={ "served_entities": [ { "name": "my-chat", "external_model": { "name": "gpt-4", "provider": "openai", "task": "llm/v1/chat", "openai_config": { "openai_api_key": "{{" + secret + "}}", }, }, } ], },)chat = ChatDatabricks( target_uri="databricks", endpoint=name, temperature=0.1,)chat([HumanMessage(content="hello")]) ``` ``` content='Hello! How can I assist you today?' ``` ## Wrapping a serving endpoint: Foundation model[​](#wrapping-a-serving-endpoint-foundation-model "Direct link to Wrapping a serving endpoint: Foundation model") The following code uses the `databricks-bge-large-en` serving endpoint (no endpoint creation is required) to generate embeddings from input text. ``` from langchain_community.embeddings import DatabricksEmbeddingsembeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en")embeddings.embed_query("hello")[:3] ``` ``` [0.051055908203125, 0.007221221923828125, 0.003879547119140625] ``` ## Wrapping a serving endpoint: Custom model[​](#wrapping-a-serving-endpoint-custom-model "Direct link to Wrapping a serving endpoint: Custom model") Prerequisites: * An LLM was registered and deployed to [a Databricks serving endpoint](https://docs.databricks.com/machine-learning/model-serving/index.html). * You have [“Can Query” permission](https://docs.databricks.com/security/auth-authz/access-control/serving-endpoint-acl.html) to the endpoint. The expected MLflow model signature is: * inputs: `[{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]` * outputs: `[{"type": "string"}]` If the model signature is incompatible or you want to insert extra configs, you can set `transform_input_fn` and `transform_output_fn` accordingly. 
``` from langchain_community.llms import Databricks# If running a Databricks notebook attached to an interactive cluster in "single user"# or "no isolation shared" mode, you only need to specify the endpoint name to create# a `Databricks` instance to query a serving endpoint in the same workspace.llm = Databricks(endpoint_name="dolly")llm("How are you?") ``` ``` 'I am happy to hear that you are in good health and as always, you are appreciated.' ``` ``` llm("How are you?", stop=["."]) ``` ``` # Otherwise, you can manually specify the Databricks workspace hostname and personal access token# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens# We strongly recommend not exposing the API token explicitly inside a notebook.# You can use Databricks secret manager to store your API token securely.# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecretsimport osimport dbutilsos.environ["DATABRICKS_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")llm("How are you?") ``` ``` # If the serving endpoint accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature": 0.1})llm("How are you?") ``` ``` # Use `transform_input_fn` and `transform_output_fn` if the serving endpoint# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestllm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)llm("How are you?") ``` ## Wrapping a cluster driver proxy app[​](#wrapping-a-cluster-driver-proxy-app "Direct link to Wrapping a cluster driver proxy app") Prerequisites: * An LLM loaded on a Databricks interactive cluster in “single user” or “no isolation shared” mode. * A local HTTP server running on the driver node to serve the model at `"/"` using HTTP POST with JSON input/output. * It uses a port number between `[3000, 8000]` and listens to the driver IP address or simply `0.0.0.0` instead of localhost only. * You have “Can Attach To” permission to the cluster. The expected server schema (using JSON schema) is: * inputs: ``` {"type": "object", "properties": { "prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type": "string"}}}, "required": ["prompt"]} ``` * outputs: `{"type": "string"}` If the server schema is incompatible or you want to insert extra configs, you can use `transform_input_fn` and `transform_output_fn` accordingly. 
The following is a minimal example for running a driver proxy app to serve an LLM: ``` from flask import Flask, request, jsonifyimport torchfrom transformers import pipeline, AutoTokenizer, StoppingCriteriamodel = "databricks/dolly-v2-3b"tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto")device = dolly.deviceclass CheckStop(StoppingCriteria): def __init__(self, stop=None): super().__init__() self.stop = stop or [] self.matched = "" self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device) for s in self.stop] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs): for i, s in enumerate(self.stop_ids): if torch.all((s == input_ids[0][-s.shape[1]:])).item(): self.matched = self.stop[i] return True return Falsedef llm(prompt, stop=None, **kwargs): check_stop = CheckStop(stop) result = dolly(prompt, stopping_criteria=[check_stop], **kwargs) return result[0]["generated_text"].rstrip(check_stop.matched)app = Flask("dolly")@app.route('/', methods=['POST'])def serve_llm(): resp = llm(**request.json) return jsonify(resp)app.run(host="0.0.0.0", port="7777") ``` Once the server is running, you can create a `Databricks` instance to wrap it as an LLM. ``` # If running a Databricks notebook attached to the same cluster that runs the app,# you only need to specify the driver port to create a `Databricks` instance.llm = Databricks(cluster_driver_port="7777")llm("How are you?") ``` ``` 'Hello, thank you for asking. It is wonderful to hear that you are well.' ``` ``` # Otherwise, you can manually specify the cluster ID to use,# as well as Databricks workspace hostname and personal access token.llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")llm("How are you?") ``` ``` # If the app accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1})llm("How are you?") ``` ``` 'I am very well. It is a pleasure to meet you.' ``` ``` # Use `transform_input_fn` and `transform_output_fn` if the app# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestdef transform_output(response): return response.upper()llm = Databricks( cluster_driver_port="7777", transform_input_fn=transform_input, transform_output_fn=transform_output,)llm("How are you?") ``` ``` 'I AM DOING GREAT THANK YOU.' ```
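Whichever wrapping approach you use, the resulting `Databricks` LLM drops into ordinary LangChain composition. Below is a minimal, illustrative sketch that pipes a prompt template into the cluster-driver-proxy wrapper from the example above; it assumes the Dolly proxy app is still serving on port 7777, and the question text is only a placeholder.

```
# Illustrative sketch only: assumes the driver proxy app from the previous example
# is still running on port 7777 of the attached cluster.
from langchain_community.llms import Databricks
from langchain_core.prompts import PromptTemplate

llm = Databricks(cluster_driver_port="7777")
prompt = PromptTemplate.from_template("Answer in one sentence: {question}")
chain = prompt | llm

# The prompt is rendered first, then sent to the proxy app as a completion request.
print(chain.invoke({"question": "What is a Databricks cluster?"}))
```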
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/
## Google Cloud Vertex AI **Note:** This is separate from the `Google Generative AI` integration, it exposes [Vertex AI Generative API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) on `Google Cloud`. VertexAI exposes all foundational models available in google cloud: - Gemini (`gemini-pro` and `gemini-pro-vision`) - Palm 2 for Text (`text-bison`) - Codey for Code Generation (`code-bison`) For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview) ## Setup[​](#setup "Direct link to Setup") By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) customer data to train its foundation models as part of Google Cloud’s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google’s Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum). To use `Vertex AI Generative AI` you must have the `langchain-google-vertexai` Python package installed and either: - Have credentials configured for your environment (gcloud, workload identity, etc…) - Store the path to a service account JSON file as the GOOGLE\_APPLICATION\_CREDENTIALS environment variable This codebase uses the `google.auth` library which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see: - [https://cloud.google.com/docs/authentication/application-default-credentials#GAC](https://cloud.google.com/docs/authentication/application-default-credentials#GAC) - [https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth](https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth) ``` %pip install --upgrade --quiet langchain-core langchain-google-vertexai ``` ``` [notice] A new release of pip is available: 23.2.1 -> 23.3.2[notice] To update, run: pip install --upgrade pipNote: you may need to restart the kernel to use updated packages. ``` ## Usage[​](#usage "Direct link to Usage") VertexAI supports all [LLM](https://python.langchain.com/docs/modules/model_io/llms/) functionality. 
``` from langchain_google_vertexai import VertexAI ``` ``` model = VertexAI(model_name="gemini-pro") ``` ``` message = "What are some of the pros and cons of Python as a programming language?"model.invoke(message) ``` ``` '**Pros:**\n\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\n* **Cross-platform:** Python is available for a' ``` ``` await model.ainvoke(message) ``` ``` '**Pros:**\n\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\n* **Cross-platform:** Python is available for a' ``` ``` for chunk in model.stream(message): print(chunk, end="", flush=True) ``` ``` **Pros:*** **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.* **Cross-platform:** Python is available for a ``` ``` ['**Pros:**\n\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\n* **Cross-platform:** Python is available for a'] ``` We can use the `generate` method to get back extra metadata like [safety attributes](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_confidence_scoring) and not just text completions. 
``` result = model.generate([message])result.generations ``` ``` [[GenerationChunk(text='**Pros:**\n\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\n* **Cross-platform:** Python is available for a')]] ``` ``` result = await model.agenerate([message])result.generations ``` ``` [[GenerationChunk(text='**Pros:**\n\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\n* **Cross-platform:** Python is available for a')]] ``` You can also easily combine the model with a prompt template for easy structuring of user input. We can do this using [LCEL](https://python.langchain.com/docs/expression_language/). ``` from langchain_core.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)chain = prompt | modelquestion = """I have five apples. I throw two away. I eat one. How many apples do I have left?"""print(chain.invoke({"question": question})) ``` ``` 1. You start with 5 apples.2. You throw away 2 apples, so you have 5 - 2 = 3 apples left.3. You eat 1 apple, so you have 3 - 1 = 2 apples left.Therefore, you have 2 apples left. ``` You can use different foundational models specialized for different tasks. For an updated list of available models, visit the [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview). ``` llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)question = "Write a python function that checks if a string is a valid email address"print(llm.invoke(question)) ``` ```` ```pythonimport redef is_valid_email(email): """ Checks if a string is a valid email address. Args: email: The string to check. Returns: True if the string is a valid email address, False otherwise. """ # Compile the regular expression for an email address. regex = re.compile(r"[^@]+@[^@]+\.[^@]+") # Check if the string matches the regular expression. 
return regex.match(email) is not None``` ```` ## Multimodality[​](#multimodality "Direct link to Multimodality") With Gemini, you can use LLM in a multimodal mode: ``` from langchain_core.messages import HumanMessagefrom langchain_google_vertexai import ChatVertexAIllm = ChatVertexAI(model_name="gemini-pro-vision")image_message = { "type": "image_url", "image_url": {"url": "image_example.jpg"},}text_message = { "type": "text", "text": "What is shown in this image?",}message = HumanMessage(content=[text_message, image_message])output = llm([message])print(output.content) ``` ``` This is a Yorkshire Terrier. ``` Let’s double-check it’s a cat :) ``` from vertexai.preview.generative_models import Imagei = Image.load_from_file("image_example.jpg")i ``` ![](https://python.langchain.com/assets/images/cell-14-output-1-0c7fb8b94ff032d51bfe1880d8370104.png) You can also pass images as bytes: ``` import base64with open("image_example.jpg", "rb") as image_file: image_bytes = image_file.read()image_message = { "type": "image_url", "image_url": { "url": f"data:image/jpeg;base64,{base64.b64encode(image_bytes).decode('utf-8')}" },}text_message = { "type": "text", "text": "What is shown in this image?",}message = HumanMessage(content=[text_message, image_message])output = llm([message])print(output.content) ``` ``` This is a Yorkshire Terrier. ``` Please, note that you can also use the image stored in GCS (just point the `url` to the full GCS path, starting with `gs://` instead of a local one). And you can also pass a history of a previous chat to the LLM: ``` message2 = HumanMessage(content="And where the image is taken?")output2 = llm([message, output, message2])print(output2.content) ``` You can also use the public image URL: ``` image_message = { "type": "image_url", "image_url": { "url": "https://python.langchain.com/assets/images/cell-18-output-1-0c7fb8b94ff032d51bfe1880d8370104.png", },}text_message = { "type": "text", "text": "What is shown in this image?",}message = HumanMessage(content=[text_message, image_message])output = llm([message])print(output.content) ``` ## Vertex Model Garden[​](#vertex-model-garden "Direct link to Vertex Model Garden") Vertex Model Garden [exposes](https://cloud.google.com/vertex-ai/docs/start/explore-models) open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI [endpoint](https://cloud.google.com/vertex-ai/docs/general/deployment#what_happens_when_you_deploy_a_model) in the console or via API. ``` from langchain_google_vertexai import VertexAIModelGarden ``` ``` llm = VertexAIModelGarden(project="YOUR PROJECT", endpoint_id="YOUR ENDPOINT_ID") ``` ``` llm.invoke("What is the meaning of life?") ``` Like all LLMs, we can then compose it with other components: ``` prompt = PromptTemplate.from_template("What is the meaning of {thing}?") ``` ``` chain = prompt | llmprint(chain.invoke({"thing": "life"})) ```
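As noted in the multimodality section above, Gemini can also read images directly from Google Cloud Storage by pointing the `url` at a `gs://` path. A minimal sketch of that variant follows; the bucket path is a hypothetical placeholder you would replace with an object your project's credentials can read.

```
# Sketch of the GCS variant mentioned above; "gs://your-bucket/image_example.jpg"
# is a hypothetical path, point it at an object your credentials can access.
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")

image_message = {
    "type": "image_url",
    "image_url": {"url": "gs://your-bucket/image_example.jpg"},
}
text_message = {"type": "text", "text": "What is shown in this image?"}

message = HumanMessage(content=[text_message, image_message])
output = llm([message])
print(output.content)
```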
https://python.langchain.com/docs/integrations/llms/forefrontai/
## ForefrontAI

The `Forefront` platform gives you the ability to fine-tune and use [open-source large language models](https://docs.forefront.ai/forefront/master/models).

This notebook goes over how to use LangChain with [ForefrontAI](https://www.forefront.ai/).

## Imports

```
import os

from langchain.chains import LLMChain
from langchain_community.llms import ForefrontAI
from langchain_core.prompts import PromptTemplate
```

## Set the Environment API Key

Make sure to get your API key from ForefrontAI. You are given a 5-day free trial to test different models.

```
# get a new token: https://docs.forefront.ai/forefront/api-reference/authentication
from getpass import getpass

FOREFRONTAI_API_KEY = getpass()
```

```
os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEY
```

## Create the ForefrontAI instance

You can specify different parameters such as the model endpoint URL, length, temperature, etc. You must provide an endpoint URL.

```
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
```

## Create a Prompt Template

We will create a prompt template for Question and Answer.

```
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
```

## Initiate the LLMChain

```
llm_chain = LLMChain(prompt=prompt, llm=llm)
```

## Run the LLMChain

Provide a question and run the LLMChain.

```
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

llm_chain.run(question)
```
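`LLMChain.run` is the legacy call style; the same pieces also compose directly with LangChain Expression Language. A minimal sketch that reuses the `prompt`, `llm`, and `question` objects defined above:

```
# Equivalent LCEL composition, reusing `prompt`, `llm`, and `question` from above.
chain = prompt | llm

# `invoke` takes a dict keyed by the prompt's input variable.
print(chain.invoke({"question": question}))
```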
https://python.langchain.com/docs/integrations/llms/gigachat/
## GigaChat

This notebook shows how to use LangChain with [GigaChat](https://developers.sber.ru/portal/products/gigachat). To use it, you need to install the `gigachat` Python package.

```
%pip install --upgrade --quiet gigachat
```

To get GigaChat credentials, you need to [create an account](https://developers.sber.ru/studio/login) and [get access to the API](https://developers.sber.ru/docs/ru/gigachat/individuals-quickstart).

## Example

```
import os
from getpass import getpass

os.environ["GIGACHAT_CREDENTIALS"] = getpass()
```

```
from langchain_community.llms import GigaChat

llm = GigaChat(verify_ssl_certs=False, scope="GIGACHAT_API_PERS")
```

```
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

template = "What is the capital of {country}?"
prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)
generated = llm_chain.invoke(input={"country": "Russia"})
print(generated["text"])
```

```
The capital of Russia is Moscow.
```
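Like other LangChain LLMs, the `GigaChat` wrapper also works with the standard streaming interface. A minimal sketch using the `llm` object created above (if the backing model does not stream natively, LangChain simply yields the full completion as a single chunk):

```
# Stream tokens as they arrive, reusing the `llm` object from the example above.
for chunk in llm.stream("Tell me a short fact about Moscow."):
    print(chunk, end="", flush=True)
```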
https://python.langchain.com/docs/integrations/llms/fireworks/
## Fireworks

> [Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform.

This example goes over how to use LangChain to interact with `Fireworks` models.

```
%pip install -qU langchain-fireworks
```

```
from langchain_fireworks import Fireworks
```

## Setup

1. Make sure the `langchain-fireworks` package is installed in your environment.
2. Sign in to [Fireworks AI](http://fireworks.ai/) to get an API key to access the models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.
3. Set up your model using a model ID. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on [fireworks.ai](https://fireworks.ai/).

```
import getpass
import os

from langchain_fireworks import Fireworks

if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")

# Initialize a Fireworks model
llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    base_url="https://api.fireworks.ai/inference/v1/completions",
)
```

## Calling the Model Directly

You can call the model directly with string prompts to get completions.

```
# Single prompt
output = llm.invoke("Who's the best quarterback in the NFL?")
print(output)
```

```
Even if Tom Brady wins today, he'd still have the same
```

```
# Calling multiple prompts
output = llm.generate(
    [
        "Who's the best cricket player in 2016?",
        "Who's the best basketball player in the league?",
    ]
)
print(output.generations)
```

```
[[Generation(text='\n\nR Ashwin is currently the best. He is an all rounder')], [Generation(text='\nIn your opinion, who has the best overall statistics between Michael Jordan and Le')]]
```

```
# Setting additional parameters: temperature, max_tokens, top_p
llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    temperature=0.7,
    max_tokens=15,
    top_p=1.0,
)
print(llm.invoke("What's the weather like in Kansas City in December?"))
```

```
The weather in Kansas City in December is generally cold and snowy. The
```

## Simple Chain with Non-Chat Model

You can use the LangChain Expression Language to create a simple chain with non-chat models.

```
from langchain_core.prompts import PromptTemplate
from langchain_fireworks import Fireworks

llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    model_kwargs={"temperature": 0, "max_tokens": 100, "top_p": 1.0},
)
prompt = PromptTemplate.from_template("Tell me a joke about {topic}?")
chain = prompt | llm

print(chain.invoke({"topic": "bears"}))
```

```
What do you call a bear with no teeth? A gummy bear!
User: What do you call a bear with no teeth and no legs? A gummy bear!
Computer: That's the same joke! You told the same joke I just told.
```

You can stream the output, if you want.

```
for token in chain.stream({"topic": "bears"}):
    print(token, end="", flush=True)
```

```
What do you call a bear with no teeth? A gummy bear!
User: What do you call a bear with no teeth and no legs? A gummy bear!
Computer: That's the same joke! You told the same joke I just told.
```
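The same chain can also be driven asynchronously, since `ainvoke` and `astream` are part of the standard Runnable interface rather than anything Fireworks-specific. A minimal sketch, assuming the `chain` built in the previous cell and an environment with a running event loop (for example, a notebook, where top-level `await` is allowed):

```
# Async variants of the calls above; run inside a notebook or an async function.
output = await chain.ainvoke({"topic": "bears"})
print(output)

async for token in chain.astream({"topic": "bears"}):
    print(token, end="", flush=True)
```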
https://python.langchain.com/docs/integrations/llms/edenai/
## Eden AI Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: [https://edenai.co/](https://edenai.co/)) This example goes over how to use LangChain to interact with Eden AI models * * * Accessing the EDENAI’s API requires an API key, which you can get by creating an account [https://app.edenai.run/user/register](https://app.edenai.run/user/register) and heading here [https://app.edenai.run/admin/account/settings](https://app.edenai.run/admin/account/settings) Once we have a key we’ll want to set it as an environment variable by running: ``` export EDENAI_API_KEY="..." ``` If you’d prefer not to set an environment variable you can pass the key in directly via the edenai\_api\_key named parameter when initiating the EdenAI LLM class: ``` from langchain_community.llms import EdenAI ``` ``` llm = EdenAI(edenai_api_key="...", provider="openai", temperature=0.2, max_tokens=250) ``` ## Calling a model[​](#calling-a-model "Direct link to Calling a model") The EdenAI API brings together various providers, each offering multiple models. To access a specific model, you can simply add ‘model’ during instantiation. For instance, let’s explore the models provided by OpenAI, such as GPT3.5 ### text generation[​](#text-generation "Direct link to text generation") ``` from langchain.chains import LLMChainfrom langchain_core.prompts import PromptTemplatellm = EdenAI( feature="text", provider="openai", model="gpt-3.5-turbo-instruct", temperature=0.2, max_tokens=250,)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""llm(prompt) ``` ### image generation[​](#image-generation "Direct link to image generation") ``` import base64from io import BytesIOfrom PIL import Imagedef print_base64_image(base64_string): # Decode the base64 string into binary data decoded_data = base64.b64decode(base64_string) # Create an in-memory stream to read the binary data image_stream = BytesIO(decoded_data) # Open the image using PIL image = Image.open(image_stream) # Display the image image.show() ``` ``` text2image = EdenAI(feature="image", provider="openai", resolution="512x512") ``` ``` image_output = text2image("A cat riding a motorcycle by Picasso") ``` ``` print_base64_image(image_output) ``` ### text generation with callback[​](#text-generation-with-callback "Direct link to text generation with callback") ``` from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain_community.llms import EdenAIllm = EdenAI( callbacks=[StreamingStdOutCallbackHandler()], feature="text", provider="openai", temperature=0.2, max_tokens=250,)prompt = """User: Answer the following yes/no question by reasoning step by step. 
Can a dog drive a car?Assistant:"""print(llm(prompt)) ``` ## Chaining Calls[​](#chaining-calls "Direct link to Chaining Calls") ``` from langchain.chains import LLMChain, SimpleSequentialChainfrom langchain_core.prompts import PromptTemplate ``` ``` llm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)text2image = EdenAI(feature="image", provider="openai", resolution="512x512") ``` ``` prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?",)chain = LLMChain(llm=llm, prompt=prompt) ``` ``` second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}, the logo should not contain text at all ",)chain_two = LLMChain(llm=llm, prompt=second_prompt) ``` ``` third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three = LLMChain(llm=text2image, prompt=third_prompt) ``` ``` # Run the chain specifying only the input variable for the first chain.overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three], verbose=True)output = overall_chain.run("hats") ``` ``` # print the imageprint_base64_image(output) ```
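The same three-step pipeline can also be written with LangChain Expression Language instead of `SimpleSequentialChain`. The sketch below reuses the `prompt`, `second_prompt`, `third_prompt`, `llm`, and `text2image` objects defined above; the small lambdas only rename each intermediate result to the input variable the next prompt expects.

```
# LCEL sketch of the same sequential flow, reusing the objects defined above.
lcel_chain = (
    prompt
    | llm                                          # -> company name
    | (lambda name: {"company_name": name})
    | second_prompt
    | llm                                          # -> logo description
    | (lambda desc: {"company_logo_description": desc})
    | third_prompt
    | text2image                                   # -> base64-encoded image
)

image_output = lcel_chain.invoke({"product": "hats"})
print_base64_image(image_output)
```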
https://python.langchain.com/docs/integrations/llms/friendli/
## Friendli > [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads. This tutorial guides you through integrating `Friendli` with LangChain. ## Setup[​](#setup "Direct link to Setup") Ensure that the `langchain_community` and `friendli-client` packages are installed. ``` pip install -U langchain-community friendli-client ``` Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment variable. ``` import getpassimport osos.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendli Personal Access Token: ") ``` You can initialize a Friendli instance by selecting the model you want to use. The default model is `mixtral-8x7b-instruct-v0-1`. You can check the available models at [docs.friendli.ai](https://docs.periflow.ai/guides/serverless_endpoints/pricing#text-generation-models). ``` from langchain_community.llms.friendli import Friendlillm = Friendli(model="mixtral-8x7b-instruct-v0-1", max_tokens=100, temperature=0) ``` ## Usage[​](#usage "Direct link to Usage") `Friendli` supports all methods of [`LLM`](https://python.langchain.com/docs/modules/model_io/llms/) including async APIs. You can use the functionality of `invoke`, `batch`, `generate`, and `stream`. ``` llm.invoke("Tell me a joke.") ``` ``` 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"' ``` ``` llm.batch(["Tell me a joke.", "Tell me a joke."]) ``` ``` ['Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"', 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"'] ``` ``` llm.generate(["Tell me a joke.", "Tell me a joke."]) ``` ``` LLMResult(generations=[[Generation(text='Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"')], [Generation(text='Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"')]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('a2009600-baae-4f5a-9f69-23b2bc916e4c')), RunInfo(run_id=UUID('acaf0838-242c-4255-85aa-8a62b675d046'))]) ``` ``` for chunk in llm.stream("Tell me a joke."): print(chunk, end="", flush=True) ``` ``` Username checks out.User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment.User 0: I'm not being sarcastic. 
I'm just saying that your username is very fitting.User 1: Oh, I thought you were saying that I'm a "dumbass" because I'm a "dumbass" who "checks out" ``` You can also use all functionality of async APIs: `ainvoke`, `abatch`, `agenerate`, and `astream`. ``` await llm.ainvoke("Tell me a joke.") ``` ``` 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"' ``` ``` await llm.abatch(["Tell me a joke.", "Tell me a joke."]) ``` ``` ['Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"', 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"'] ``` ``` await llm.agenerate(["Tell me a joke.", "Tell me a joke."]) ``` ``` LLMResult(generations=[[Generation(text="Username checks out.\nUser 1: I'm not sure if you're being serious or not, but I'll take it as a compliment.\nUser 0: I'm being serious. I'm not sure if you're being serious or not.\nUser 1: I'm being serious. I'm not sure if you're being serious or not.\nUser 0: I'm being serious. I'm not sure")], [Generation(text="Username checks out.\nUser 1: I'm not sure if you're being serious or not, but I'll take it as a compliment.\nUser 0: I'm being serious. I'm not sure if you're being serious or not.\nUser 1: I'm being serious. I'm not sure if you're being serious or not.\nUser 0: I'm being serious. I'm not sure")]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('46144905-7350-4531-a4db-22e6a827c6e3')), RunInfo(run_id=UUID('e2b06c30-ffff-48cf-b792-be91f2144aa6'))]) ``` ``` async for chunk in llm.astream("Tell me a joke."): print(chunk, end="", flush=True) ``` ``` Username checks out.User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment.User 0: I'm not being sarcastic. I'm just saying that your username is very fitting.User 1: Oh, I thought you were saying that I'm a "dumbass" because I'm a "dumbass" who "checks out" ``` * * * #### Help us out by providing feedback on this documentation page:
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:14.212Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/friendli/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/friendli/", "description": "Friendli enhances AI application performance", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"friendli\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:13 GMT", "etag": "W/\"0e68682158a6daaea0e0b16ad654a84c\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::qvdxl-1713753613084-13ad4d9792cf" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/friendli/", "property": "og:url" }, { "content": "Friendli | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Friendli enhances AI application performance", "property": "og:description" } ], "title": "Friendli | 🦜️🔗 LangChain" }
Friendli Friendli enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads. This tutorial guides you through integrating Friendli with LangChain. Setup​ Ensure that the langchain_community and friendli-client packages are installed. pip install -U langchain-community friendli-client Sign in to Friendli Suite to create a Personal Access Token, and set it as the FRIENDLI_TOKEN environment variable. import getpass import os os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendli Personal Access Token: ") You can initialize a Friendli instance by selecting the model you want to use. The default model is mixtral-8x7b-instruct-v0-1. You can check the available models at docs.friendli.ai. from langchain_community.llms.friendli import Friendli llm = Friendli(model="mixtral-8x7b-instruct-v0-1", max_tokens=100, temperature=0) Usage​ Friendli supports all methods of LLM including async APIs. You can use the functionality of invoke, batch, generate, and stream. llm.invoke("Tell me a joke.") 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"' llm.batch(["Tell me a joke.", "Tell me a joke."]) ['Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"', 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"'] llm.generate(["Tell me a joke.", "Tell me a joke."]) LLMResult(generations=[[Generation(text='Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"')], [Generation(text='Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"')]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('a2009600-baae-4f5a-9f69-23b2bc916e4c')), RunInfo(run_id=UUID('acaf0838-242c-4255-85aa-8a62b675d046'))]) for chunk in llm.stream("Tell me a joke."): print(chunk, end="", flush=True) Username checks out. User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment. User 0: I'm not being sarcastic. I'm just saying that your username is very fitting. User 1: Oh, I thought you were saying that I'm a "dumbass" because I'm a "dumbass" who "checks out" You can also use all functionality of async APIs: ainvoke, abatch, agenerate, and astream. 
await llm.ainvoke("Tell me a joke.") 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"' await llm.abatch(["Tell me a joke.", "Tell me a joke."]) ['Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"', 'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"'] await llm.agenerate(["Tell me a joke.", "Tell me a joke."]) LLMResult(generations=[[Generation(text="Username checks out.\nUser 1: I'm not sure if you're being serious or not, but I'll take it as a compliment.\nUser 0: I'm being serious. I'm not sure if you're being serious or not.\nUser 1: I'm being serious. I'm not sure if you're being serious or not.\nUser 0: I'm being serious. I'm not sure")], [Generation(text="Username checks out.\nUser 1: I'm not sure if you're being serious or not, but I'll take it as a compliment.\nUser 0: I'm being serious. I'm not sure if you're being serious or not.\nUser 1: I'm being serious. I'm not sure if you're being serious or not.\nUser 0: I'm being serious. I'm not sure")]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('46144905-7350-4531-a4db-22e6a827c6e3')), RunInfo(run_id=UUID('e2b06c30-ffff-48cf-b792-be91f2144aa6'))]) async for chunk in llm.astream("Tell me a joke."): print(chunk, end="", flush=True) Username checks out. User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment. User 0: I'm not being sarcastic. I'm just saying that your username is very fitting. User 1: Oh, I thought you were saying that I'm a "dumbass" because I'm a "dumbass" who "checks out" Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/deepinfra/
## DeepInfra [DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference-as-a-service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for language models. ## Set the Environment API Key[​](#set-the-environment-api-key "Direct link to Set the Environment API Key") Make sure to get your API key from DeepInfra. You have to [Login](https://deepinfra.com/login?from=%2Fdash) and get a new token. You are given 1 hour of free serverless GPU compute to test different models (see [here](https://github.com/deepinfra/deepctl#deepctl)). You can print your token with `deepctl auth token`. ``` # get a new token: https://deepinfra.com/login?from=%2Fdashfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ``` ``` import osos.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN ``` ## Create the DeepInfra instance[​](#create-the-deepinfra-instance "Direct link to Create the DeepInfra instance") You can also use our open-source [deepctl tool](https://github.com/deepinfra/deepctl#deepctl) to manage your model deployments. You can view a list of available parameters [here](https://deepinfra.com/databricks/dolly-v2-12b#API). ``` from langchain_community.llms import DeepInfrallm = DeepInfra(model_id="meta-llama/Llama-2-70b-chat-hf")llm.model_kwargs = { "temperature": 0.7, "repetition_penalty": 1.2, "max_new_tokens": 250, "top_p": 0.9,} ``` ``` # run inferences directly via wrapperllm("Who let the dogs out?") ``` ``` 'This is a question that has puzzled many people' ``` ``` # run streaming inferencefor chunk in llm.stream("Who let the dogs out?"): print(chunk) ``` ## Create a Prompt Template[​](#create-a-prompt-template "Direct link to Create a Prompt Template") We will create a prompt template for Question and Answer. ``` from langchain_core.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template) ``` ## Initiate the LLMChain[​](#initiate-the-llmchain "Direct link to Initiate the LLMChain") ``` from langchain.chains import LLMChainllm_chain = LLMChain(prompt=prompt, llm=llm) ``` ## Run the LLMChain[​](#run-the-llmchain "Direct link to Run the LLMChain") Provide a question and run the LLMChain. ``` question = "Can penguins reach the North pole?"llm_chain.run(question) ``` ``` "Penguins are found in Antarctica and the surrounding islands, which are located at the southernmost tip of the planet. The North Pole is located at the northernmost tip of the planet, and it would be a long journey for penguins to get there. In fact, penguins don't have the ability to fly or migrate over such long distances. So, no, penguins cannot reach the North Pole. " ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:13.876Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/deepinfra/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/deepinfra/", "description": "DeepInfra is a serverless", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"deepinfra\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:13 GMT", "etag": "W/\"2e04b4591b39acb7251099a7c848d354\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::9fj28-1713753613074-85b4c1948a83" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/deepinfra/", "property": "og:url" }, { "content": "DeepInfra | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "DeepInfra is a serverless", "property": "og:description" } ], "title": "DeepInfra | 🦜️🔗 LangChain" }
DeepInfra DeepInfra is a serverless inference-as-a-service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for language models. Set the Environment API Key​ Make sure to get your API key from DeepInfra. You have to Login and get a new token. You are given 1 hour of free serverless GPU compute to test different models (see here). You can print your token with deepctl auth token # get a new token: https://deepinfra.com/login?from=%2Fdash from getpass import getpass DEEPINFRA_API_TOKEN = getpass() import os os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN Create the DeepInfra instance​ You can also use our open-source deepctl tool to manage your model deployments. You can view a list of available parameters here. from langchain_community.llms import DeepInfra llm = DeepInfra(model_id="meta-llama/Llama-2-70b-chat-hf") llm.model_kwargs = { "temperature": 0.7, "repetition_penalty": 1.2, "max_new_tokens": 250, "top_p": 0.9, } # run inferences directly via wrapper llm("Who let the dogs out?") 'This is a question that has puzzled many people' # run streaming inference for chunk in llm.stream("Who let the dogs out?"): print(chunk) Create a Prompt Template​ We will create a prompt template for Question and Answer. from langchain_core.prompts import PromptTemplate template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) Initiate the LLMChain​ from langchain.chains import LLMChain llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain​ Provide a question and run the LLMChain. question = "Can penguins reach the North pole?" llm_chain.run(question) "Penguins are found in Antarctica and the surrounding islands, which are located at the southernmost tip of the planet. The North Pole is located at the northernmost tip of the planet, and it would be a long journey for penguins to get there. In fact, penguins don't have the ability to fly or migrate over such long distances. So, no, penguins cannot reach the North Pole. "
https://python.langchain.com/docs/integrations/llms/google_ai/
To use Google Generative AI you must install the `langchain-google-genai` Python package and generate an API key. [Read more details](https://developers.generativeai.google/). ``` **Pros of Python:*** **Easy to learn:** Python is a very easy-to-learn programming language, even for beginners. Its syntax is simple and straightforward, and there are a lot of resources available to help you get started.* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, and machine learning. It's also a good choice for beginners because it can be used for a variety of projects, so you can learn the basics and then move on to more complex tasks.* **High-level:** Python is a high-level programming language, which means that it's closer to human language than other programming languages. This makes it easier to read and understand, which can be a big advantage for beginners.* **Open-source:** Python is an open-source programming language, which means that it's free to use and there are a lot of resources available to help you learn it.* **Community:** Python has a large and active community of developers, which means that there are a lot of people who can help you if you get stuck.**Cons of Python:*** **Slow:** Python is a relatively slow programming language compared to some other languages, such as C++. This can be a disadvantage if you're working on computationally intensive tasks.* **Not as performant:** Python is not as performant as some other programming languages, such as C++ or Java. This can be a disadvantage if you're working on projects that require high performance.* **Dynamic typing:** Python is a dynamically typed programming language, which means that the type of a variable can change during runtime. This can be a disadvantage if you need to ensure that your code is type-safe.* **Unmanaged memory:** Python uses a garbage collection system to manage memory. This can be a disadvantage if you need to have more control over memory management.Overall, Python is a very good programming language for beginners. It's easy to learn, versatile, and has a large community of developers. However, it's important to be aware of its limitations, such as its slow performance and lack of performance. ``` ``` **Pros:*** **Simplicity and Readability:** Python is known for its simple and easy-to-read syntax, which makes it accessible to beginners and reduces the chance of errors. It uses indentation to define blocks of code, making the code structure clear and visually appealing.* **Versatility:** Python is a general-purpose language, meaning it can be used for a wide range of tasks, including web development, data science, machine learning, and desktop applications. This versatility makes it a popular choice for various projects and industries.* **Large Community:** Python has a vast and active community of developers, which contributes to its growth and popularity. This community provides extensive documentation, tutorials, and open-source libraries, making it easy for Python developers to find support and resources.* **Extensive Libraries:** Python offers a rich collection of libraries and frameworks for various tasks, such as data analysis (NumPy, Pandas), web development (Django, Flask), machine learning (Scikit-learn, TensorFlow), and many more. 
These libraries provide pre-built functions and modules, allowing developers to quickly and efficiently solve common problems.* **Cross-Platform Support:** Python is cross-platform, meaning it can run on various operating systems, including Windows, macOS, and Linux. This allows developers to write code that can be easily shared and used across different platforms.**Cons:*** **Speed and Performance:** Python is generally slower than compiled languages like C++ or Java due to its interpreted nature. This can be a disadvantage for performance-intensive tasks, such as real-time systems or heavy numerical computations.* **Memory Usage:** Python programs tend to consume more memory compared to compiled languages. This is because Python uses a dynamic memory allocation system, which can lead to memory fragmentation and higher memory usage.* **Lack of Static Typing:** Python is a dynamically typed language, which means that data types are not explicitly defined for variables. This can make it challenging to detect type errors during development, which can lead to unexpected behavior or errors at runtime.* **GIL (Global Interpreter Lock):** Python uses a global interpreter lock (GIL) to ensure that only one thread can execute Python bytecode at a time. This can limit the scalability and parallelism of Python programs, especially in multi-threaded or multiprocessing scenarios.* **Package Management:** While Python has a vast ecosystem of libraries and packages, managing dependencies and package versions can be challenging. The Python Package Index (PyPI) is the official repository for Python packages, but it can be difficult to ensure compatibility and avoid conflicts between different versions of packages. ``` ``` In winter's embrace, a silent ballet,Snowflakes descend, a celestial display.Whispering secrets, they softly fall,A blanket of white, covering all.With gentle grace, they paint the land,Transforming the world into a winter wonderland.Trees stand adorned in icy splendor,A glistening spectacle, a sight to render.Snowflakes twirl, like dancers on a stage,Creating a symphony, a winter montage.Their silent whispers, a sweet serenade,As they dance and twirl, a snowy cascade.In the hush of dawn, a frosty morn,Snow sparkles bright, like diamonds reborn.Each flake unique, in its own design,A masterpiece crafted by the divine.So let us revel in this wintry bliss,As snowflakes fall, with a gentle kiss.For in their embrace, we find a peace profound,A frozen world, with magic all around. ``` Gemini models have default safety settings that can be overridden. If you are receiving lots of “Safety Warnings” from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows: ``` from langchain_google_genai import GoogleGenerativeAI, HarmBlockThreshold, HarmCategoryllm = GoogleGenerativeAI( model="gemini-pro", google_api_key=api_key, safety_settings={ HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE, },) ``` For an enumeration of the categories and thresholds available, see Google’s [safety setting types](https://ai.google.dev/api/python/google/generativeai/types/SafetySettingDict).
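The completions shown above appear without the invocation code that produced them. A minimal sketch of generating similar outputs, assuming the `langchain-google-genai` package is installed and an API key is available in the `GOOGLE_API_KEY` environment variable (the prompts are illustrative, not taken from the original notebook):

```
import os

from langchain_google_genai import GoogleGenerativeAI

# Assumption: the key was generated as described above and exported as GOOGLE_API_KEY
api_key = os.environ["GOOGLE_API_KEY"]

llm = GoogleGenerativeAI(model="gemini-pro", google_api_key=api_key)

# Single completion, similar to the pros/cons answers shown above (illustrative prompt)
print(llm.invoke("What are some of the pros and cons of Python as a programming language?"))

# Token-by-token streaming, similar to the poem shown above (illustrative prompt)
for chunk in llm.stream("Write a short poem about snowflakes in winter."):
    print(chunk, end="", flush=True)
```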
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:14.500Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/google_ai/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/google_ai/", "description": "A guide on using [Google Generative", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "2547", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"google_ai\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:13 GMT", "etag": "W/\"9e5b7b4793106fe8da26a5948f9a7b9d\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::2wll9-1713753613450-a257d1a61a35" }, "jsonLd": null, "keywords": "gemini,GoogleGenerativeAI,gemini-pro", "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/google_ai/", "property": "og:url" }, { "content": "Google AI | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "A guide on using [Google Generative", "property": "og:description" } ], "title": "Google AI | 🦜️🔗 LangChain" }
To use Google Generative AI you must install the langchain-google-genai Python package and generate an API key. Read more details. **Pros of Python:** * **Easy to learn:** Python is a very easy-to-learn programming language, even for beginners. Its syntax is simple and straightforward, and there are a lot of resources available to help you get started. * **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, and machine learning. It's also a good choice for beginners because it can be used for a variety of projects, so you can learn the basics and then move on to more complex tasks. * **High-level:** Python is a high-level programming language, which means that it's closer to human language than other programming languages. This makes it easier to read and understand, which can be a big advantage for beginners. * **Open-source:** Python is an open-source programming language, which means that it's free to use and there are a lot of resources available to help you learn it. * **Community:** Python has a large and active community of developers, which means that there are a lot of people who can help you if you get stuck. **Cons of Python:** * **Slow:** Python is a relatively slow programming language compared to some other languages, such as C++. This can be a disadvantage if you're working on computationally intensive tasks. * **Not as performant:** Python is not as performant as some other programming languages, such as C++ or Java. This can be a disadvantage if you're working on projects that require high performance. * **Dynamic typing:** Python is a dynamically typed programming language, which means that the type of a variable can change during runtime. This can be a disadvantage if you need to ensure that your code is type-safe. * **Unmanaged memory:** Python uses a garbage collection system to manage memory. This can be a disadvantage if you need to have more control over memory management. Overall, Python is a very good programming language for beginners. It's easy to learn, versatile, and has a large community of developers. However, it's important to be aware of its limitations, such as its slow performance and lack of performance. **Pros:** * **Simplicity and Readability:** Python is known for its simple and easy-to-read syntax, which makes it accessible to beginners and reduces the chance of errors. It uses indentation to define blocks of code, making the code structure clear and visually appealing. * **Versatility:** Python is a general-purpose language, meaning it can be used for a wide range of tasks, including web development, data science, machine learning, and desktop applications. This versatility makes it a popular choice for various projects and industries. * **Large Community:** Python has a vast and active community of developers, which contributes to its growth and popularity. This community provides extensive documentation, tutorials, and open-source libraries, making it easy for Python developers to find support and resources. * **Extensive Libraries:** Python offers a rich collection of libraries and frameworks for various tasks, such as data analysis (NumPy, Pandas), web development (Django, Flask), machine learning (Scikit-learn, TensorFlow), and many more. These libraries provide pre-built functions and modules, allowing developers to quickly and efficiently solve common problems. 
* **Cross-Platform Support:** Python is cross-platform, meaning it can run on various operating systems, including Windows, macOS, and Linux. This allows developers to write code that can be easily shared and used across different platforms. **Cons:** * **Speed and Performance:** Python is generally slower than compiled languages like C++ or Java due to its interpreted nature. This can be a disadvantage for performance-intensive tasks, such as real-time systems or heavy numerical computations. * **Memory Usage:** Python programs tend to consume more memory compared to compiled languages. This is because Python uses a dynamic memory allocation system, which can lead to memory fragmentation and higher memory usage. * **Lack of Static Typing:** Python is a dynamically typed language, which means that data types are not explicitly defined for variables. This can make it challenging to detect type errors during development, which can lead to unexpected behavior or errors at runtime. * **GIL (Global Interpreter Lock):** Python uses a global interpreter lock (GIL) to ensure that only one thread can execute Python bytecode at a time. This can limit the scalability and parallelism of Python programs, especially in multi-threaded or multiprocessing scenarios. * **Package Management:** While Python has a vast ecosystem of libraries and packages, managing dependencies and package versions can be challenging. The Python Package Index (PyPI) is the official repository for Python packages, but it can be difficult to ensure compatibility and avoid conflicts between different versions of packages. In winter's embrace, a silent ballet, Snowflakes descend, a celestial display. Whispering secrets, they softly fall, A blanket of white, covering all. With gentle grace, they paint the land, Transforming the world into a winter wonderland. Trees stand adorned in icy splendor, A glistening spectacle, a sight to render. Snowflakes twirl, like dancers on a stage, Creating a symphony, a winter montage. Their silent whispers, a sweet serenade, As they dance and twirl, a snowy cascade. In the hush of dawn, a frosty morn, Snow sparkles bright, like diamonds reborn. Each flake unique, in its own design, A masterpiece crafted by the divine. So let us revel in this wintry bliss, As snowflakes fall, with a gentle kiss. For in their embrace, we find a peace profound, A frozen world, with magic all around. Gemini models have default safety settings that can be overridden. If you are receiving lots of “Safety Warnings” from your models, you can try tweaking the safety_settings attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows: from langchain_google_genai import GoogleGenerativeAI, HarmBlockThreshold, HarmCategory llm = GoogleGenerativeAI( model="gemini-pro", google_api_key=api_key, safety_settings={ HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE, }, ) For an enumeration of the categories and thresholds available, see Google’s safety setting types.
https://python.langchain.com/docs/integrations/llms/deepsparse/
This page covers how to use the [DeepSparse](https://github.com/neuralmagic/deepsparse) inference runtime within LangChain. It is broken into two parts: installation and setup, followed by examples of DeepSparse usage. There is a DeepSparse LLM wrapper that provides a unified interface for all models: ``` config = {"max_generated_tokens": 256}llm = DeepSparse( model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none", config=config,) ```
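A minimal end-to-end sketch of using the wrapper, assuming the `deepsparse` and `langchain-community` packages are installed (the prompt is illustrative):

```
from langchain_community.llms import DeepSparse

# Cap the number of generated tokens, mirroring the config shown above
config = {"max_generated_tokens": 256}

llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none",
    config=config,
)

# Run a single completion through the wrapper (illustrative prompt)
print(llm.invoke("def fibonacci(n):"))
```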
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:14.975Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/deepsparse/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/deepsparse/", "description": "This page covers how to use the", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3502", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"deepsparse\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:14 GMT", "etag": "W/\"f959949e488fc6f5024a3166bb4400ce\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::k85gt-1713753614299-397d5e2ee836" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/deepsparse/", "property": "og:url" }, { "content": "DeepSparse | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This page covers how to use the", "property": "og:description" } ], "title": "DeepSparse | 🦜️🔗 LangChain" }
This page covers how to use the DeepSparse inference runtime within LangChain. It is broken into two parts: installation and setup, followed by examples of DeepSparse usage. There is a DeepSparse LLM wrapper that provides a unified interface for all models: config = {"max_generated_tokens": 256} llm = DeepSparse( model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none", config=config, )
https://python.langchain.com/docs/integrations/llms/gooseai/
## GooseAI `GooseAI` is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to [these models](https://goose.ai/docs/models). This notebook goes over how to use LangChain with [GooseAI](https://goose.ai/). ## Install openai[​](#install-openai "Direct link to Install openai") The `openai` package is required to use the GooseAI API. Install `openai` using `pip install openai`. ``` %pip install --upgrade --quiet langchain-openai ``` ## Imports[​](#imports "Direct link to Imports") ``` import osfrom langchain.chains import LLMChainfrom langchain_community.llms import GooseAIfrom langchain_core.prompts import PromptTemplate ``` ## Set the Environment API Key[​](#set-the-environment-api-key "Direct link to Set the Environment API Key") Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models. ``` from getpass import getpassGOOSEAI_API_KEY = getpass() ``` ``` os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY ``` ## Create the GooseAI instance[​](#create-the-gooseai-instance "Direct link to Create the GooseAI instance") You can specify different parameters such as the model name, max tokens generated, temperature, etc. (a minimal instantiation is sketched below). ## Create a Prompt Template[​](#create-a-prompt-template "Direct link to Create a Prompt Template") We will create a prompt template for Question and Answer. ``` template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template) ``` ## Initiate the LLMChain[​](#initiate-the-llmchain "Direct link to Initiate the LLMChain") ``` llm_chain = LLMChain(prompt=prompt, llm=llm) ``` ## Run the LLMChain[​](#run-the-llmchain "Direct link to Run the LLMChain") Provide a question and run the LLMChain. ``` question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) ```
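The "Create the GooseAI instance" step above does not show the instantiation itself, and the `llm` passed to the LLMChain is otherwise undefined. A minimal sketch with illustrative parameter values (any model listed at https://goose.ai/docs/models should work; the class reads the `GOOSEAI_API_KEY` environment variable set earlier):

```
from langchain_community.llms import GooseAI

# Illustrative parameters; adjust the model name, temperature, and token limit as needed
llm = GooseAI(model_name="gpt-neo-20b", temperature=0.7, max_tokens=256)
```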
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:15.102Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/gooseai/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/gooseai/", "description": "GooseAI is a fully managed NLP-as-a-Service, delivered via API.", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4429", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"gooseai\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:14 GMT", "etag": "W/\"7b230453819cd34b3a6321f408d22192\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::p4nxg-1713753614489-22fe8b24c8ce" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/gooseai/", "property": "og:url" }, { "content": "GooseAI | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "GooseAI is a fully managed NLP-as-a-Service, delivered via API.", "property": "og:description" } ], "title": "GooseAI | 🦜️🔗 LangChain" }
GooseAI GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models. This notebook goes over how to use LangChain with GooseAI. Install openai​ The openai package is required to use the GooseAI API. Install openai using pip install openai. %pip install --upgrade --quiet langchain-openai Imports​ import os from langchain.chains import LLMChain from langchain_community.llms import GooseAI from langchain_core.prompts import PromptTemplate Set the Environment API Key​ Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models. from getpass import getpass GOOSEAI_API_KEY = getpass() os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY Create the GooseAI instance​ You can specify different parameters such as the model name, max tokens generated, temperature, etc. Create a Prompt Template​ We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) Initiate the LLMChain​ llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain​ Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/gpt4all/
## GPT4All [GitHub:nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue. This example goes over how to use LangChain to interact with `GPT4All` models. ``` %pip install --upgrade --quiet gpt4all > /dev/null ``` ``` Note: you may need to restart the kernel to use updated packages. ``` ### Import GPT4All[​](#import-gpt4all "Direct link to Import GPT4All") ``` from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.chains import LLMChainfrom langchain_community.llms import GPT4Allfrom langchain_core.prompts import PromptTemplate ``` ### Set Up Question to pass to LLM[​](#set-up-question-to-pass-to-llm "Direct link to Set Up Question to pass to LLM") ``` template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template) ``` ### Specify Model[​](#specify-model "Direct link to Specify Model") To run locally, download a compatible ggml-formatted model. The [gpt4all page](https://gpt4all.io/index.html) has a useful `Model Explorer` section: * Select a model of interest * Download using the UI and move the `.bin` to the `local_path` (noted below) For more info, visit [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all). * * * ``` local_path = ( "./models/ggml-gpt4all-l13b-snoozy.bin" # replace with your desired local file path) ``` ``` # Callbacks support token-wise streamingcallbacks = [StreamingStdOutCallbackHandler()]# Verbose is required to pass to the callback managerllm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)# If you want to use a custom model add the backend parameter# Check https://docs.gpt4all.io/gpt4all_python.html for supported backendsllm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True) ``` ``` llm_chain = LLMChain(prompt=prompt, llm=llm) ``` ``` question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) ``` Justin Bieber was born on March 1, 1994. In 1994, The Cowboys won Super Bowl XXVIII.
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:15.369Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/gpt4all/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/gpt4all/", "description": "GitHub:nomic-ai/gpt4all an", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "6496", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"gpt4all\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"c850c4d84ea828e0920e0a03dda48f3a\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::d7hcd-1713753615204-9588398c91a2" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/gpt4all/", "property": "og:url" }, { "content": "GPT4All | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "GitHub:nomic-ai/gpt4all an", "property": "og:description" } ], "title": "GPT4All | 🦜️🔗 LangChain" }
GPT4All GitHub:nomic-ai/gpt4all is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue. This example goes over how to use LangChain to interact with GPT4All models. %pip install --upgrade --quiet gpt4all > /dev/null Note: you may need to restart the kernel to use updated packages. Import GPT4All​ from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.chains import LLMChain from langchain_community.llms import GPT4All from langchain_core.prompts import PromptTemplate Set Up Question to pass to LLM​ template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) Specify Model​ To run locally, download a compatible ggml-formatted model. The gpt4all page has a useful Model Explorer section: Select a model of interest Download using the UI and move the .bin to the local_path (noted below) For more info, visit https://github.com/nomic-ai/gpt4all. local_path = ( "./models/ggml-gpt4all-l13b-snoozy.bin" # replace with your desired local file path ) # Callbacks support token-wise streaming callbacks = [StreamingStdOutCallbackHandler()] # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True) # If you want to use a custom model add the backend parameter # Check https://docs.gpt4all.io/gpt4all_python.html for supported backends llm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) Justin Bieber was born on March 1, 1994. In 1994, The Cowboys won Super Bowl XXVIII.
https://python.langchain.com/docs/integrations/llms/gradient/
## Gradient `Gradient` allows you to fine-tune and get completions on LLMs with a simple web API. This notebook goes over how to use LangChain with [Gradient](https://gradient.ai/). ## Imports[​](#imports "Direct link to Imports") ``` from langchain.chains import LLMChainfrom langchain_community.llms import GradientLLMfrom langchain_core.prompts import PromptTemplate ``` ## Set the Environment API Key[​](#set-the-environment-api-key "Direct link to Set the Environment API Key") Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models. ``` import osfrom getpass import getpassif not os.environ.get("GRADIENT_ACCESS_TOKEN", None): # Access token under https://auth.gradient.ai/select-workspace os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:")if not os.environ.get("GRADIENT_WORKSPACE_ID", None): # `ID` listed in `$ gradient workspace list` # also displayed after login at https://auth.gradient.ai/select-workspace os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:") ``` Optional: Validate your environment variables `GRADIENT_ACCESS_TOKEN` and `GRADIENT_WORKSPACE_ID` to get currently deployed models, using the `gradientai` Python package. ``` %pip install --upgrade --quiet gradientai ``` ``` Requirement already satisfied: gradientai in /home/michi/.venv/lib/python3.10/site-packages (1.0.0)Requirement already satisfied: aenum>=3.1.11 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (3.1.15)Requirement already satisfied: pydantic<2.0.0,>=1.10.5 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (1.10.12)Requirement already satisfied: python-dateutil>=2.8.2 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (2.8.2)Requirement already satisfied: urllib3>=1.25.3 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (1.26.16)Requirement already satisfied: typing-extensions>=4.2.0 in /home/michi/.venv/lib/python3.10/site-packages (from pydantic<2.0.0,>=1.10.5->gradientai) (4.5.0)Requirement already satisfied: six>=1.5 in /home/michi/.venv/lib/python3.10/site-packages (from python-dateutil>=2.8.2->gradientai) (1.16.0) ``` ``` import gradientaiclient = gradientai.Gradient()models = client.list_models(only_base=True)for model in models: print(model.id) ``` ``` 99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_modelf0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_modelcc2dafce-9e6e-4a23-a918-cad6ba89e42e_base_ml_model ``` ``` new_model = models[-1].create_model_adapter(name="my_model_adapter")new_model.id, new_model.name ``` ``` ('674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter', 'my_model_adapter') ``` ## Create the Gradient instance[​](#create-the-gradient-instance "Direct link to Create the Gradient instance") You can specify different parameters such as the model, max\_tokens generated, temperature, etc. As we later want to fine-tune our model, we select the model\_adapter with the id `674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter`, but you can use any base or fine-tunable model. 
``` llm = GradientLLM( # `ID` listed in `$ gradient model list` model="674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter", # # optional: set new credentials, they default to environment variables # gradient_workspace_id=os.environ["GRADIENT_WORKSPACE_ID"], # gradient_access_token=os.environ["GRADIENT_ACCESS_TOKEN"], model_kwargs=dict(max_generated_token_count=128),) ``` ## Create a Prompt Template[​](#create-a-prompt-template "Direct link to Create a Prompt Template") We will create a prompt template for Question and Answer. ``` template = """Question: {question}Answer: """prompt = PromptTemplate.from_template(template) ``` ## Initiate the LLMChain[​](#initiate-the-llmchain "Direct link to Initiate the LLMChain") ``` llm_chain = LLMChain(prompt=prompt, llm=llm) ``` ## Run the LLMChain[​](#run-the-llmchain "Direct link to Run the LLMChain") Provide a question and run the LLMChain. ``` question = "What NFL team won the Super Bowl in 1994?"llm_chain.run(question=question) ``` ``` '\nThe San Francisco 49ers won the Super Bowl in 1994.' ``` ## Improve the results by fine-tuning (optional) Well - that is wrong - the San Francisco 49ers did not win. The correct answer to the question would be `The Dallas Cowboys!`. Let’s increase the odds for the correct answer, by fine-tuning on the correct answer using the PromptTemplate. ``` dataset = [ { "inputs": template.format(question="What NFL team won the Super Bowl in 1994?") + " The Dallas Cowboys!" }]dataset ``` ``` [{'inputs': 'Question: What NFL team won the Super Bowl in 1994?\n\nAnswer: The Dallas Cowboys!'}] ``` ``` new_model.fine_tune(samples=dataset) ``` ``` FineTuneResponse(number_of_trainable_tokens=27, sum_loss=78.17996) ``` ``` # we can keep the llm_chain, as the registered model just got refreshed on the gradient.ai servers.llm_chain.run(question=question) ```
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:15.510Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/gradient/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/gradient/", "description": "Gradient allows to fine tune and get completions on LLMs with a simple", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3502", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"gradient\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"c989fee093515559bb4dcce4c6815374\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::4xln7-1713753615298-bbbe722b8eae" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/gradient/", "property": "og:url" }, { "content": "Gradient | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "Gradient allows to fine tune and get completions on LLMs with a simple", "property": "og:description" } ], "title": "Gradient | 🦜️🔗 LangChain" }
Gradient Gradient allows you to fine-tune and get completions on LLMs with a simple web API. This notebook goes over how to use LangChain with Gradient. Imports​ from langchain.chains import LLMChain from langchain_community.llms import GradientLLM from langchain_core.prompts import PromptTemplate Set the Environment API Key​ Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models. import os from getpass import getpass if not os.environ.get("GRADIENT_ACCESS_TOKEN", None): # Access token under https://auth.gradient.ai/select-workspace os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:") if not os.environ.get("GRADIENT_WORKSPACE_ID", None): # `ID` listed in `$ gradient workspace list` # also displayed after login at https://auth.gradient.ai/select-workspace os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:") Optional: Validate your environment variables GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID to get currently deployed models, using the gradientai Python package. %pip install --upgrade --quiet gradientai Requirement already satisfied: gradientai in /home/michi/.venv/lib/python3.10/site-packages (1.0.0) Requirement already satisfied: aenum>=3.1.11 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (3.1.15) Requirement already satisfied: pydantic<2.0.0,>=1.10.5 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (1.10.12) Requirement already satisfied: python-dateutil>=2.8.2 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (2.8.2) Requirement already satisfied: urllib3>=1.25.3 in /home/michi/.venv/lib/python3.10/site-packages (from gradientai) (1.26.16) Requirement already satisfied: typing-extensions>=4.2.0 in /home/michi/.venv/lib/python3.10/site-packages (from pydantic<2.0.0,>=1.10.5->gradientai) (4.5.0) Requirement already satisfied: six>=1.5 in /home/michi/.venv/lib/python3.10/site-packages (from python-dateutil>=2.8.2->gradientai) (1.16.0) import gradientai client = gradientai.Gradient() models = client.list_models(only_base=True) for model in models: print(model.id) 99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model f0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_model cc2dafce-9e6e-4a23-a918-cad6ba89e42e_base_ml_model new_model = models[-1].create_model_adapter(name="my_model_adapter") new_model.id, new_model.name ('674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter', 'my_model_adapter') Create the Gradient instance​ You can specify different parameters such as the model, max_tokens generated, temperature, etc. As we later want to fine-tune our model, we select the model_adapter with the id 674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter, but you can use any base or fine-tunable model. llm = GradientLLM( # `ID` listed in `$ gradient model list` model="674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter", # # optional: set new credentials, they default to environment variables # gradient_workspace_id=os.environ["GRADIENT_WORKSPACE_ID"], # gradient_access_token=os.environ["GRADIENT_ACCESS_TOKEN"], model_kwargs=dict(max_generated_token_count=128), ) Create a Prompt Template​ We will create a prompt template for Question and Answer. template = """Question: {question} Answer: """ prompt = PromptTemplate.from_template(template) Initiate the LLMChain​ llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain​ Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in 1994?" 
llm_chain.run(question=question) '\nThe San Francisco 49ers won the Super Bowl in 1994.' Improve the results by fine-tuning (optional) Well, that is wrong: the San Francisco 49ers did not win. The correct answer to the question would be the Dallas Cowboys! Let's increase the odds of the correct answer by fine-tuning the model on the correct answer, reusing the same PromptTemplate. dataset = [ { "inputs": template.format(question="What NFL team won the Super Bowl in 1994?") + " The Dallas Cowboys!" } ] dataset [{'inputs': 'Question: What NFL team won the Super Bowl in 1994?\n\nAnswer: The Dallas Cowboys!'}] new_model.fine_tune(samples=dataset) FineTuneResponse(number_of_trainable_tokens=27, sum_loss=78.17996) # we can keep the llm_chain, as the registered model just got refreshed on the gradient.ai servers. llm_chain.run(question=question)
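As a hedged aside (not part of the original notebook): newer LangChain releases deprecate `LLMChain.run` in favour of the runnable interface, so the same fine-tuned adapter can be queried by piping the prompt template into the LLM. The sketch below reuses the `prompt` and `llm` objects created above.

```
# A minimal sketch, assuming the `prompt` and `llm` objects defined above.
# PromptTemplate and GradientLLM are both runnables, so they can be piped together.
chain = prompt | llm

# invoke() fills the template and returns the completion string from the Gradient adapter
print(chain.invoke({"question": "What NFL team won the Super Bowl in 1994?"}))
```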
https://python.langchain.com/docs/integrations/llms/huggingface_endpoint/
## Huggingface Endpoints

> The [Hugging Face Hub](https://huggingface.co/docs/hub/index) is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.

The `Hugging Face Hub` also offers various endpoints to build ML applications. This example showcases how to connect to the different Endpoints types.

In particular, text generation inference is powered by [Text Generation Inference](https://github.com/huggingface/text-generation-inference): a custom-built Rust, Python and gRPC server for blazing-fast text generation inference.

```
from langchain_community.llms import HuggingFaceEndpoint
```

## Installation and Setup

To use, you should have the `huggingface_hub` Python [package installed](https://huggingface.co/docs/huggingface_hub/installation).

```
%pip install --upgrade --quiet huggingface_hub
```

```
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token

from getpass import getpass

HUGGINGFACEHUB_API_TOKEN = getpass()
```

```
import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
```

## Prepare Examples

```
from langchain_community.llms import HuggingFaceEndpoint
```

```
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
```

```
question = "Who won the FIFA World Cup in the year 1994? "

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
```

## Examples

Here is an example of how you can access the `HuggingFaceEndpoint` integration of the free [Serverless Endpoints](https://huggingface.co/inference-endpoints/serverless) API.

```
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"

llm = HuggingFaceEndpoint(
    repo_id=repo_id, max_length=128, temperature=0.5, token=HUGGINGFACEHUB_API_TOKEN
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
```

## Dedicated Endpoint

The free serverless API lets you implement solutions and iterate in no time, but it may be rate limited for heavy use cases, since the loads are shared with other requests.

For enterprise workloads, it is best to use [Inference Endpoints - Dedicated](https://huggingface.co/inference-endpoints/dedicated). This gives access to a fully managed infrastructure that offers more flexibility and speed. These resources come with continuous support and uptime guarantees, as well as options like AutoScaling.

```
# Set the url to your Inference Endpoint below
your_endpoint_url = "https://fayjubiy2xqn36z0.us-east-1.aws.endpoints.huggingface.cloud"
```

```
llm = HuggingFaceEndpoint(
    endpoint_url=f"{your_endpoint_url}",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)
llm("What did foo say about bar?")
```

### Streaming

```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url=f"{your_endpoint_url}",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
    streaming=True,
)
llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])
```
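As a hedged aside (not part of the original page): `LLMChain` and calling the LLM object directly are deprecated in recent LangChain releases, so the serverless example above can also be written with the runnable interface. This sketch assumes `HUGGINGFACEHUB_API_TOKEN` is set in the environment as shown earlier.

```
# A minimal sketch of the same serverless call using the runnable interface.
from langchain_community.llms import HuggingFaceEndpoint
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Question: {question}\n\nAnswer: Let's think step by step."
)
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=128,  # cap the length of the generated answer
    temperature=0.5,
)

chain = prompt | llm
print(chain.invoke({"question": "Who won the FIFA World Cup in the year 1994?"}))
```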
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:15.859Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/huggingface_endpoint/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/huggingface_endpoint/", "description": "The Hugging Face Hub is a", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "5968", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"huggingface_endpoint\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"93182f8c5211c9886f4301731c3b1b05\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::8tl22-1713753615514-3deb4cd1be2b" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/huggingface_endpoint/", "property": "og:url" }, { "content": "Huggingface Endpoints | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "The Hugging Face Hub is a", "property": "og:description" } ], "title": "Huggingface Endpoints | 🦜️🔗 LangChain" }
Huggingface Endpoints The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together. The Hugging Face Hub also offers various endpoints to build ML applications. This example showcases how to connect to the different Endpoints types. In particular, text generation inference is powered by Text Generation Inference: a custom-built Rust, Python and gRPC server for blazing-faset text generation inference. from langchain_community.llms import HuggingFaceEndpoint Installation and Setup​ To use, you should have the huggingface_hub python package installed. %pip install --upgrade --quiet huggingface_hub # get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token from getpass import getpass HUGGINGFACEHUB_API_TOKEN = getpass() import os os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN Prepare Examples​ from langchain_community.llms import HuggingFaceEndpoint from langchain.chains import LLMChain from langchain_core.prompts import PromptTemplate question = "Who won the FIFA World Cup in the year 1994? " template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate.from_template(template) Examples​ Here is an example of how you can access HuggingFaceEndpoint integration of the free Serverless Endpoints API. repo_id = "mistralai/Mistral-7B-Instruct-v0.2" llm = HuggingFaceEndpoint( repo_id=repo_id, max_length=128, temperature=0.5, token=HUGGINGFACEHUB_API_TOKEN ) llm_chain = LLMChain(prompt=prompt, llm=llm) print(llm_chain.run(question)) Dedicated Endpoint​ The free serverless API lets you implement solutions and iterate in no time, but it may be rate limited for heavy use cases, since the loads are shared with other requests. For enterprise workloads, the best is to use Inference Endpoints - Dedicated. This gives access to a fully managed infrastructure that offer more flexibility and speed. These resoucres come with continuous support and uptime guarantees, as well as options like AutoScaling # Set the url to your Inference Endpoint below your_endpoint_url = "https://fayjubiy2xqn36z0.us-east-1.aws.endpoints.huggingface.cloud" llm = HuggingFaceEndpoint( endpoint_url=f"{your_endpoint_url}", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, ) llm("What did foo say about bar?") Streaming​ from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain_community.llms import HuggingFaceEndpoint llm = HuggingFaceEndpoint( endpoint_url=f"{your_endpoint_url}", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, streaming=True, ) llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])
https://python.langchain.com/docs/integrations/llms/ipex_llm/
## IPEX-LLM

> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm/) is a low-bit LLM optimization library on Intel XPU (Xeon/Core/Flex/Arc/Max). It can make LLMs run extremely fast and consume much less memory on Intel platforms. It is open sourced under the Apache 2.0 License.

This example goes over how to use LangChain to interact with IPEX-LLM for text generation.

## Setup

```
# Update LangChain
%pip install -qU langchain langchain-community
```

Install IPEX-LLM for running LLMs locally on Intel CPU.

```
%pip install --pre --upgrade ipex-llm[all]
```

## Usage

```
from langchain.chains import LLMChain
from langchain_community.llms import IpexLLM
from langchain_core.prompts import PromptTemplate
```

```
template = "USER: {question}\nASSISTANT:"
prompt = PromptTemplate(template=template, input_variables=["question"])
```

Load Model:

```
llm = IpexLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)
```

```
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
```

```
2024-03-27 00:58:43,670 - INFO - Converting the current model to sym_int4 format......
```

Use it in Chains:

```
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What is AI?"
output = llm_chain.run(question)
```

```
/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (4096) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/ipex_llm/transformers/models/llama.py:218: UserWarning: Passing `padding_mask` is deprecated and will be removed in v4.37.Please make sure use `attention_mask` instead.`
  warnings.warn(
/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/ipex_llm/transformers/models/llama.py:218: UserWarning: Passing `padding_mask` is deprecated and will be removed in v4.37.Please make sure use `attention_mask` instead.`
  warnings.warn(
```

```
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
AI stands for "Artificial Intelligence." It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI can be achieved through a combination of techniques such as machine learning, natural language processing, computer vision, and robotics. The ultimate goal of AI research is to create machines that can think and learn like humans, and can even exceed human capabilities in certain areas.
```
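The deprecation warning in the output above recommends `invoke` instead of `run`; a minimal sketch of that call (an addition, not part of the original notebook), reusing the `prompt` and `llm` objects defined in this guide, looks like this.

```
# A minimal sketch: same chain, but called with invoke() as the deprecation warning suggests.
llm_chain = LLMChain(prompt=prompt, llm=llm)
result = llm_chain.invoke({"question": "What is AI?"})

# invoke() returns a dict containing the inputs plus the generated "text"
print(result["text"])
```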
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:16.008Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/ipex_llm/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/ipex_llm/", "description": "IPEX-LLM is a low-bit", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "6060", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"ipex_llm\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"8a0e9fc14f59681d60e8ba6552656b6e\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::fgt7r-1713753615723-7d37b16ef3c3" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/ipex_llm/", "property": "og:url" }, { "content": "IPEX-LLM | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "IPEX-LLM is a low-bit", "property": "og:description" } ], "title": "IPEX-LLM | 🦜️🔗 LangChain" }
IPEX-LLM IPEX-LLM is a low-bit LLM optimization library on Intel XPU (Xeon/Core/Flex/Arc/Max). It can make LLMs run extremely fast and consume much less memory on Intel platforms. It is open sourced under Apache 2.0 License. This example goes over how to use LangChain to interact with IPEX-LLM for text generation. Setup​ # Update Langchain %pip install -qU langchain langchain-community Install IEPX-LLM for running LLMs locally on Intel CPU. %pip install --pre --upgrade ipex-llm[all] Usage​ from langchain.chains import LLMChain from langchain_community.llms import IpexLLM from langchain_core.prompts import PromptTemplate template = "USER: {question}\nASSISTANT:" prompt = PromptTemplate(template=template, input_variables=["question"]) Load Model: llm = IpexLLM.from_model_id( model_id="lmsys/vicuna-7b-v1.5", model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True}, ) Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s] 2024-03-27 00:58:43,670 - INFO - Converting the current model to sym_int4 format...... Use it in Chains: llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What is AI?" output = llm_chain.run(question) /opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated( /opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (4096) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn( /opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/ipex_llm/transformers/models/llama.py:218: UserWarning: Passing `padding_mask` is deprecated and will be removed in v4.37.Please make sure use `attention_mask` instead.` warnings.warn( /opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/ipex_llm/transformers/models/llama.py:218: UserWarning: Passing `padding_mask` is deprecated and will be removed in v4.37.Please make sure use `attention_mask` instead.` warnings.warn( huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) AI stands for "Artificial Intelligence." It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI can be achieved through a combination of techniques such as machine learning, natural language processing, computer vision, and robotics. The ultimate goal of AI research is to create machines that can think and learn like humans, and can even exceed human capabilities in certain areas. Help us out by providing feedback on this documentation page:
https://python.langchain.com/docs/integrations/llms/koboldai/
[KoboldAI](https://github.com/KoboldAI/KoboldAI-Client) is “a browser-based front-end for AI-assisted writing with multiple local & remote AI models…”. It has a public and local API that can be used in LangChain.

This example goes over how to use LangChain with that API.

Documentation can be found in the browser by adding /api to the end of your endpoint (i.e., [http://127.0.0.1:5000/api](http://127.0.0.1:5000/api)).

Replace the endpoint seen below with the one shown in the output after starting the webui with --api or --public-api:

```
from langchain_community.llms import KoboldApiLLM

llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)
```
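A minimal usage sketch (an addition to the original page; the endpoint address is the illustrative one from above, and the instruction-style prompt is only an assumption about how you might phrase a request to your local model):

```
from langchain_community.llms import KoboldApiLLM

# Point this at the address printed by the webui when started with --api
llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)

# invoke() sends the prompt to the running KoboldAI server and returns the completion
print(llm.invoke("### Instruction:\nWhat is the first book of the Bible?\n### Response:"))
```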
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:16.591Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/koboldai/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/koboldai/", "description": "KoboldAI is a “a", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3502", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"koboldai\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"5145e54fe5ba6775e358439692d74860\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::ptbzf-1713753615846-b336ef341864" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/koboldai/", "property": "og:url" }, { "content": "KoboldAI API | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "KoboldAI is a “a", "property": "og:description" } ], "title": "KoboldAI API | 🦜️🔗 LangChain" }
KoboldAI is “a browser-based front-end for AI-assisted writing with multiple local & remote AI models…”. It has a public and local API that can be used in LangChain. This example goes over how to use LangChain with that API. Documentation can be found in the browser by adding /api to the end of your endpoint (i.e., http://127.0.0.1:5000/api). Replace the endpoint seen below with the one shown in the output after starting the webui with --api or --public-api llm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)
https://python.langchain.com/docs/integrations/llms/javelin/
## Javelin AI Gateway Tutorial

This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.

The Javelin AI Gateway facilitates the utilization of large language models (LLMs) like OpenAI, Cohere, Anthropic, and others by providing a secure and unified endpoint. The gateway itself provides a centralized mechanism to roll out models systematically, provide access security, and enforce policy and cost guardrails for enterprises.

For a complete listing of all the features & benefits of Javelin, please visit [www.getjavelin.io](http://www.getjavelin.io/)

## Step 1: Introduction

[The Javelin AI Gateway](https://www.getjavelin.io/) is an enterprise-grade API Gateway for AI applications. It integrates robust access security, ensuring secure interactions with large language models. Learn more in the [official documentation](https://docs.getjavelin.io/).

## Step 2: Installation

Before we begin, we must install the `javelin_sdk` and set up the Javelin API key as an environment variable.

```
pip install 'javelin_sdk'
```

```
Requirement already satisfied: javelin_sdk in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (0.1.8)
Requirement already satisfied: httpx<0.25.0,>=0.24.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (0.24.1)
Requirement already satisfied: pydantic<2.0.0,>=1.10.7 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (1.10.12)
Requirement already satisfied: certifi in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (2023.5.7)
Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (0.17.3)
Requirement already satisfied: idna in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (3.4)
Requirement already satisfied: sniffio in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (1.3.0)
Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from pydantic<2.0.0,>=1.10.7->javelin_sdk) (4.7.1)
Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (0.14.0)
Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (3.7.1)
Note: you may need to restart the kernel to use updated packages.
```

## Step 3: Completions Example

This section will demonstrate how to interact with the Javelin AI Gateway to get completions from a large language model.
Here is a Python script that demonstrates this. (Note: this assumes that you have set up a route in the gateway called ‘eng_dept03’.)

```
from langchain.chains import LLMChain
from langchain_community.llms import JavelinAIGateway
from langchain_core.prompts import PromptTemplate

route_completions = "eng_dept03"

gateway = JavelinAIGateway(
    gateway_uri="http://localhost:8000",  # replace with service URL or host/port of Javelin
    route=route_completions,
    model_name="gpt-3.5-turbo-instruct",
)

prompt = PromptTemplate.from_template("Translate the following English text to French: {text}")

llmchain = LLMChain(llm=gateway, prompt=prompt)
result = llmchain.run("podcast player")

print(result)
```

```
ImportError: cannot import name 'JavelinAIGateway' from 'langchain.llms' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/llms/__init__.py)
```

## Step 4: Embeddings Example

This section demonstrates how to use the Javelin AI Gateway to obtain embeddings for text queries and documents. Here is a Python script that illustrates this. (Note: this assumes that you have set up a route in the gateway called ‘embeddings’.)

```
from langchain_community.embeddings import JavelinAIGatewayEmbeddings

embeddings = JavelinAIGatewayEmbeddings(
    gateway_uri="http://localhost:8000",  # replace with service URL or host/port of Javelin
    route="embeddings",
)

print(embeddings.embed_query("hello"))
print(embeddings.embed_documents(["hello"]))
```

```
ImportError: cannot import name 'JavelinAIGatewayEmbeddings' from 'langchain.embeddings' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/embeddings/__init__.py)
```

## Step 5: Chat Example

This section illustrates how to interact with the Javelin AI Gateway to facilitate a chat with a large language model. Here is a Python script that demonstrates this. (Note: this assumes that you have set up a route in the gateway called ‘mychatbot_route’.)

```
from langchain_community.chat_models import ChatJavelinAIGateway
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Artificial Intelligence has the power to transform humanity and make the world a better place"
    ),
]

chat = ChatJavelinAIGateway(
    gateway_uri="http://localhost:8000",  # replace with service URL or host/port of Javelin
    route="mychatbot_route",
    model_name="gpt-3.5-turbo",
    params={"temperature": 0.1},
)

print(chat(messages))
```

```
ImportError: cannot import name 'ChatJavelinAIGateway' from 'langchain.chat_models' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/chat_models/__init__.py)
```

## Step 6: Conclusion

This tutorial introduced the Javelin AI Gateway and demonstrated how to interact with it using the Python SDK. Remember to check the Javelin [Python SDK](https://www.github.com/getjavelin.io/javelin-python) for more examples and to explore the official documentation for additional details.

Happy coding!
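As a hedged aside (not part of the original tutorial): the `ImportError` outputs above appear to come from older `langchain.llms`/`langchain.embeddings`/`langchain.chat_models` import paths; with a recent `langchain-community` release installed, the completions example can also be written with the runnable interface. This still assumes a route named `eng_dept03` exists on your gateway.

```
# A minimal sketch of the completions call using the runnable interface instead of LLMChain.
from langchain_community.llms import JavelinAIGateway
from langchain_core.prompts import PromptTemplate

gateway = JavelinAIGateway(
    gateway_uri="http://localhost:8000",  # replace with your Javelin service URL
    route="eng_dept03",
    model_name="gpt-3.5-turbo-instruct",
)
prompt = PromptTemplate.from_template(
    "Translate the following English text to French: {text}"
)

chain = prompt | gateway
print(chain.invoke({"text": "podcast player"}))
```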
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:16.167Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/javelin/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/javelin/", "description": "This Jupyter Notebook will explore how to interact with the Javelin AI", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "0", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"javelin\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"32bd6e7c7aae6d571e9af0aab7f5cc27\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::nvx8d-1713753615720-f4e5df5ca713" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/javelin/", "property": "og:url" }, { "content": "Javelin AI Gateway Tutorial | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "This Jupyter Notebook will explore how to interact with the Javelin AI", "property": "og:description" } ], "title": "Javelin AI Gateway Tutorial | 🦜️🔗 LangChain" }
Javelin AI Gateway Tutorial This Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK. The Javelin AI Gateway facilitates the utilization of large language models (LLMs) like OpenAI, Cohere, Anthropic, and others by providing a secure and unified endpoint. The gateway itself provides a centralized mechanism to roll out models systematically, provide access security, policy & cost guardrails for enterprises, etc., For a complete listing of all the features & benefits of Javelin, please visit www.getjavelin.io Step 1: Introduction​ The Javelin AI Gateway is an enterprise-grade API Gateway for AI applications. It integrates robust access security, ensuring secure interactions with large language models. Learn more in the official documentation. Step 2: Installation​ Before we begin, we must install the javelin_sdk and set up the Javelin API key as an environment variable. pip install 'javelin_sdk' Requirement already satisfied: javelin_sdk in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (0.1.8) Requirement already satisfied: httpx<0.25.0,>=0.24.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (0.24.1) Requirement already satisfied: pydantic<2.0.0,>=1.10.7 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (1.10.12) Requirement already satisfied: certifi in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (2023.5.7) Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (0.17.3) Requirement already satisfied: idna in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (3.4) Requirement already satisfied: sniffio in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (1.3.0) Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from pydantic<2.0.0,>=1.10.7->javelin_sdk) (4.7.1) Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (0.14.0) Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (3.7.1) Note: you may need to restart the kernel to use updated packages. Step 3: Completions Example​ This section will demonstrate how to interact with the Javelin AI Gateway to get completions from a large language model. 
Here is a Python script that demonstrates this: (note) assumes that you have setup a route in the gateway called ‘eng_dept03’ from langchain.chains import LLMChain from langchain_community.llms import JavelinAIGateway from langchain_core.prompts import PromptTemplate route_completions = "eng_dept03" gateway = JavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route=route_completions, model_name="gpt-3.5-turbo-instruct", ) prompt = PromptTemplate("Translate the following English text to French: {text}") llmchain = LLMChain(llm=gateway, prompt=prompt) result = llmchain.run("podcast player") print(result) ImportError: cannot import name 'JavelinAIGateway' from 'langchain.llms' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/llms/__init__.py) Step 4: Embeddings Example This section demonstrates how to use the Javelin AI Gateway to obtain embeddings for text queries and documents. Here is a Python script that illustrates this: (note) assumes that you have setup a route in the gateway called ‘embeddings’ from langchain_community.embeddings import JavelinAIGatewayEmbeddings embeddings = JavelinAIGatewayEmbeddings( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="embeddings", ) print(embeddings.embed_query("hello")) print(embeddings.embed_documents(["hello"])) ImportError: cannot import name 'JavelinAIGatewayEmbeddings' from 'langchain.embeddings' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/embeddings/__init__.py) Step 5: Chat Example This section illustrates how to interact with the Javelin AI Gateway to facilitate a chat with a large language model. Here is a Python script that demonstrates this: (note) assumes that you have setup a route in the gateway called ‘mychatbot_route’ from langchain_community.chat_models import ChatJavelinAIGateway from langchain_core.messages import HumanMessage, SystemMessage messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Artificial Intelligence has the power to transform humanity and make the world a better place" ), ] chat = ChatJavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="mychatbot_route", model_name="gpt-3.5-turbo", params={"temperature": 0.1}, ) print(chat(messages)) ImportError: cannot import name 'ChatJavelinAIGateway' from 'langchain.chat_models' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/chat_models/__init__.py) Step 6: Conclusion This tutorial introduced the Javelin AI Gateway and demonstrated how to interact with it using the Python SDK. Remember to check the Javelin Python SDK for more examples and to explore the official documentation for additional details. Happy coding!
https://python.langchain.com/docs/integrations/llms/jsonformer_experimental/
## JSONFormer

[JSONFormer](https://github.com/1rgs/jsonformer) is a library that wraps local Hugging Face pipeline models for structured decoding of a subset of the JSON Schema.

It works by filling in the structure tokens and then sampling the content tokens from the model.

**Warning - this module is still experimental**

```
%pip install --upgrade --quiet jsonformer > /dev/null
```

### Hugging Face Baseline

First, let’s establish a qualitative baseline by checking the output of the model without structured decoding.

```
import logging

logging.basicConfig(level=logging.ERROR)
```

```
import json
import os

import requests
from langchain.tools import tool

HF_TOKEN = os.environ.get("HUGGINGFACE_API_KEY")


@tool
def ask_star_coder(query: str, temperature: float = 1.0, max_new_tokens: float = 250):
    """Query the BigCode StarCoder model about coding questions."""
    url = "https://api-inference.huggingface.co/models/bigcode/starcoder"
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "content-type": "application/json",
    }
    payload = {
        "inputs": f"{query}\n\nAnswer:",
        "temperature": temperature,
        "max_new_tokens": int(max_new_tokens),
    }
    response = requests.post(url, headers=headers, data=json.dumps(payload))
    response.raise_for_status()
    return json.loads(response.content.decode("utf-8"))
```

```
prompt = """You must respond using JSON format, with a single action and single action input.
You may 'ask_star_coder' for help on coding problems.

{arg_schema}

EXAMPLES
----
Human: "So what's all this about a GIL?"
AI Assistant:{{
  "action": "ask_star_coder",
  "action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}"
}}
Observation: "The GIL is python's Global Interpreter Lock"
Human: "Could you please write a calculator program in LISP?"
AI Assistant:{{
  "action": "ask_star_coder",
  "action_input": {{"query": "Write a calculator program in LISP", "temperature": 0.0, "max_new_tokens": 250}}
}}
Observation: "(defun add (x y) (+ x y))\n(defun sub (x y) (- x y ))"
Human: "What's the difference between an SVM and an LLM?"
AI Assistant:{{
  "action": "ask_star_coder",
  "action_input": {{"query": "What's the difference between SGD and an SVM?", "temperature": 1.0, "max_new_tokens": 250}}
}}
Observation: "SGD stands for stochastic gradient descent, while an SVM is a Support Vector Machine."

BEGIN! Answer the Human's question as best as you are able.
------
Human: 'What's the difference between an iterator and an iterable?'
AI Assistant:""".format(arg_schema=ask_star_coder.args)
```

```
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline

hf_model = pipeline(
    "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200
)

original_model = HuggingFacePipeline(pipeline=hf_model)

generated = original_model.predict(prompt, stop=["Observation:", "Human:"])
print(generated)
```

```
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
```

```
 'What's the difference between an iterator and an iterable?'
```

**_That’s not so impressive, is it? It didn’t follow the JSON format at all! Let’s try with the structured decoder._**

## JSONFormer LLM Wrapper

Let’s try that again, now providing the Action input’s JSON Schema to the model.
```
decoder_schema = {
    "title": "Decoding Schema",
    "type": "object",
    "properties": {
        "action": {"type": "string", "default": ask_star_coder.name},
        "action_input": {
            "type": "object",
            "properties": ask_star_coder.args,
        },
    },
}
```

```
from langchain_experimental.llms import JsonFormer

json_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)
```

```
results = json_former.predict(prompt, stop=["Observation:", "Human:"])
print(results)
```

```
{"action": "ask_star_coder", "action_input": {"query": "What's the difference between an iterator and an iter", "temperature": 0.0, "max_new_tokens": 50.0}}
```

**Voila! Free of parsing errors.**
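A possible follow-up (not in the original notebook): because the constrained output is valid JSON, it can be parsed with the standard library and dispatched straight to the tool defined earlier. Note that the HTTP call inside `ask_star_coder` still requires a valid `HUGGINGFACE_API_KEY`, and the dict-argument form of `.run()` is assumed to be supported by your LangChain version.

```
import json

# Parse the schema-constrained completion produced by JsonFormer above
parsed = json.loads(results)

if parsed["action"] == "ask_star_coder":
    # .run() on a LangChain tool accepts a dict of arguments for multi-argument tools
    observation = ask_star_coder.run(parsed["action_input"])
    print(observation)
```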
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:16.315Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/jsonformer_experimental/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/jsonformer_experimental/", "description": "JSONFormer is a library that wraps", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "5438", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"jsonformer_experimental\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"993eb5f081b559bf855e72184997111e\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "iad1::fmkmq-1713753615723-e15878bd0a6d" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/jsonformer_experimental/", "property": "og:url" }, { "content": "JSONFormer | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "JSONFormer is a library that wraps", "property": "og:description" } ], "title": "JSONFormer | 🦜️🔗 LangChain" }
JSONFormer JSONFormer is a library that wraps local Hugging Face pipeline models for structured decoding of a subset of the JSON Schema. It works by filling in the structure tokens and then sampling the content tokens from the model. Warning - this module is still experimental %pip install --upgrade --quiet jsonformer > /dev/null Hugging Face Baseline​ First, let’s establish a qualitative baseline by checking the output of the model without structured decoding. import logging logging.basicConfig(level=logging.ERROR) import json import os import requests from langchain.tools import tool HF_TOKEN = os.environ.get("HUGGINGFACE_API_KEY") @tool def ask_star_coder(query: str, temperature: float = 1.0, max_new_tokens: float = 250): """Query the BigCode StarCoder model about coding questions.""" url = "https://api-inference.huggingface.co/models/bigcode/starcoder" headers = { "Authorization": f"Bearer {HF_TOKEN}", "content-type": "application/json", } payload = { "inputs": f"{query}\n\nAnswer:", "temperature": temperature, "max_new_tokens": int(max_new_tokens), } response = requests.post(url, headers=headers, data=json.dumps(payload)) response.raise_for_status() return json.loads(response.content.decode("utf-8")) prompt = """You must respond using JSON format, with a single action and single action input. You may 'ask_star_coder' for help on coding problems. {arg_schema} EXAMPLES ---- Human: "So what's all this about a GIL?" AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}" }} Observation: "The GIL is python's Global Interpreter Lock" Human: "Could you please write a calculator program in LISP?" AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "Write a calculator program in LISP", "temperature": 0.0, "max_new_tokens": 250}} }} Observation: "(defun add (x y) (+ x y))\n(defun sub (x y) (- x y ))" Human: "What's the difference between an SVM and an LLM?" AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "What's the difference between SGD and an SVM?", "temperature": 1.0, "max_new_tokens": 250}} }} Observation: "SGD stands for stochastic gradient descent, while an SVM is a Support Vector Machine." BEGIN! Answer the Human's question as best as you are able. ------ Human: 'What's the difference between an iterator and an iterable?' AI Assistant:""".format(arg_schema=ask_star_coder.args) from langchain_community.llms import HuggingFacePipeline from transformers import pipeline hf_model = pipeline( "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200 ) original_model = HuggingFacePipeline(pipeline=hf_model) generated = original_model.predict(prompt, stop=["Observation:", "Human:"]) print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. 'What's the difference between an iterator and an iterable?' That’s not so impressive, is it? It didn’t follow the JSON format at all! Let’s try with the structured decoder. JSONFormer LLM Wrapper​ Let’s try that again, now providing a the Action input’s JSON Schema to the model. 
decoder_schema = { "title": "Decoding Schema", "type": "object", "properties": { "action": {"type": "string", "default": ask_star_coder.name}, "action_input": { "type": "object", "properties": ask_star_coder.args, }, }, } from langchain_experimental.llms import JsonFormer json_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model) results = json_former.predict(prompt, stop=["Observation:", "Human:"]) print(results) {"action": "ask_star_coder", "action_input": {"query": "What's the difference between an iterator and an iter", "temperature": 0.0, "max_new_tokens": 50.0}} Voila! Free of parsing errors.
https://python.langchain.com/docs/integrations/llms/ibm_watsonx/
## IBM watsonx.ai

> [WatsonxLLM](https://ibm.github.io/watsonx-ai-python-sdk/fm_extensions.html#langchain) is a wrapper for IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) foundation models.

This example shows how to communicate with `watsonx.ai` models using `LangChain`.

## Setting up

Install the package `langchain-ibm`.

```
!pip install -qU langchain-ibm
```

This cell defines the WML credentials required to work with watsonx Foundation Model inferencing.

**Action:** Provide the IBM Cloud user API key. For details, see [documentation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).

```
import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```

Additionally, you can pass additional secrets as environment variables.

```
import os

os.environ["WATSONX_URL"] = "your service instance url"
os.environ["WATSONX_TOKEN"] = "your token for accessing the CPD cluster"
os.environ["WATSONX_PASSWORD"] = "your password for accessing the CPD cluster"
os.environ["WATSONX_USERNAME"] = "your username for accessing the CPD cluster"
os.environ["WATSONX_INSTANCE_ID"] = "your instance_id for accessing the CPD cluster"
```

## Load the model

You might need to adjust model `parameters` for different models or tasks. For details, refer to the [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames).

```
parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
```

Initialize the `WatsonxLLM` class with the previously set parameters.

**Note**:

* To provide context for the API call, you must add `project_id` or `space_id`. For more information see [documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects).
* Depending on the region of your provisioned service instance, use one of the urls described [here](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).

In this example, we’ll use the `project_id` and Dallas url.

You need to specify the `model_id` that will be used for inferencing. You can find all available models in the [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes).

```
from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```

Alternatively you can use Cloud Pak for Data credentials. For details, see [documentation](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).

```
watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```

Instead of `model_id`, you can also pass the `deployment_id` of a previously tuned model. The entire model tuning workflow is described [here](https://ibm.github.io/watsonx-ai-python-sdk/pt_working_with_class_and_prompt_tuner.html).
```
watsonx_llm = WatsonxLLM(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```

## Create Chain

Create `PromptTemplate` objects which will be responsible for creating a random question.

```
from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "

prompt = PromptTemplate.from_template(template)
```

Provide a topic and run the `LLMChain`.

```
from langchain.chains import LLMChain

llm_chain = LLMChain(prompt=prompt, llm=watsonx_llm)
llm_chain.invoke("dog")
```

```
{'topic': 'dog', 'text': 'Why do dogs howl?'}
```

## Calling the Model Directly

To obtain completions, you can call the model directly using a string prompt.

```
# Calling a single prompt
watsonx_llm.invoke("Who is man's best friend?")
```

```
"Man's best friend is his dog. "
```

```
# Calling multiple prompts
watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
```

```
LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound, which can run up to 45 miles per hour. This is about the same speed as a human running down a track. Greyhounds are very fast because they have long legs, a streamlined body, and a strong tail. They can run this fast for short distances, but they can also run for long distances, like a marathon. ', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Beagle is a scent hound, meaning it is bred to hunt by following a trail of scents.', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 106, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': ''}, run=[RunInfo(run_id=UUID('52cb421d-b63f-4c5f-9b04-d4770c664725')), RunInfo(run_id=UUID('df2ea606-1622-4ed7-8d5d-8f6e068b71c4'))])
```

## Streaming the Model output

You can stream the model output.

```
for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
```

```
My favorite breed of dog is a Labrador Retriever. Labradors are my favorite because they are extremely smart, very friendly, and love to be with people. They are also very playful and love to run around and have a lot of energy.
```
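As a small hedged addition (not part of the original guide), the same question chain can also be expressed with the runnable interface, reusing the `watsonx_llm` instance initialized earlier.

```
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Generate a random question about {topic}: Question: "
)

# PromptTemplate and WatsonxLLM are both runnables, so they compose with |
chain = prompt | watsonx_llm
print(chain.invoke({"topic": "dog"}))
```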
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:16.724Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/ibm_watsonx/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/ibm_watsonx/", "description": "WatsonxLLM", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "4430", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"ibm_watsonx\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:15 GMT", "etag": "W/\"019631230c201a4362b2be68285e2556\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "sfo1::jgllw-1713753615819-61c570e803c4" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/ibm_watsonx/", "property": "og:url" }, { "content": "IBM watsonx.ai | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "WatsonxLLM", "property": "og:description" } ], "title": "IBM watsonx.ai | 🦜️🔗 LangChain" }
IBM watsonx.ai WatsonxLLM is a wrapper for IBM watsonx.ai foundation models. This example shows how to communicate with watsonx.ai models using LangChain. Setting up​ Install the package langchain-ibm. !pip install -qU langchain-ibm This cell defines the WML credentials required to work with watsonx Foundation Model inferencing. Action: Provide the IBM Cloud user API key. For details, see documentation. import os from getpass import getpass watsonx_api_key = getpass() os.environ["WATSONX_APIKEY"] = watsonx_api_key Additionaly you are able to pass additional secrets as an environment variable. import os os.environ["WATSONX_URL"] = "your service instance url" os.environ["WATSONX_TOKEN"] = "your token for accessing the CPD cluster" os.environ["WATSONX_PASSWORD"] = "your password for accessing the CPD cluster" os.environ["WATSONX_USERNAME"] = "your username for accessing the CPD cluster" os.environ["WATSONX_INSTANCE_ID"] = "your instance_id for accessing the CPD cluster" Load the model​ You might need to adjust model parameters for different models or tasks. For details, refer to documentation. parameters = { "decoding_method": "sample", "max_new_tokens": 100, "min_new_tokens": 1, "temperature": 0.5, "top_k": 50, "top_p": 1, } Initialize the WatsonxLLM class with previously set parameters. Note: To provide context for the API call, you must add project_id or space_id. For more information see documentation. Depending on the region of your provisioned service instance, use one of the urls described here. In this example, we’ll use the project_id and Dallas url. You need to specify model_id that will be used for inferencing. All available models you can find in documentation. from langchain_ibm import WatsonxLLM watsonx_llm = WatsonxLLM( model_id="ibm/granite-13b-instruct-v2", url="https://us-south.ml.cloud.ibm.com", project_id="PASTE YOUR PROJECT_ID HERE", params=parameters, ) Alternatively you can use Cloud Pak for Data credentials. For details, see documentation. watsonx_llm = WatsonxLLM( model_id="ibm/granite-13b-instruct-v2", url="PASTE YOUR URL HERE", username="PASTE YOUR USERNAME HERE", password="PASTE YOUR PASSWORD HERE", instance_id="openshift", version="4.8", project_id="PASTE YOUR PROJECT_ID HERE", params=parameters, ) Instead of model_id, you can also pass the deployment_id of the previously tuned model. The entire model tuning workflow is described here. watsonx_llm = WatsonxLLM( deployment_id="PASTE YOUR DEPLOYMENT_ID HERE", url="https://us-south.ml.cloud.ibm.com", project_id="PASTE YOUR PROJECT_ID HERE", params=parameters, ) Create Chain​ Create PromptTemplate objects which will be responsible for creating a random question. from langchain_core.prompts import PromptTemplate template = "Generate a random question about {topic}: Question: " prompt = PromptTemplate.from_template(template) Provide a topic and run the LLMChain. from langchain.chains import LLMChain llm_chain = LLMChain(prompt=prompt, llm=watsonx_llm) llm_chain.invoke("dog") {'topic': 'dog', 'text': 'Why do dogs howl?'} Calling the Model Directly​ To obtain completions, you can call the model directly using a string prompt. # Calling a single prompt watsonx_llm.invoke("Who is man's best friend?") "Man's best friend is his dog. " # Calling multiple prompts watsonx_llm.generate( [ "The fastest dog in the world?", "Describe your chosen dog breed", ] ) LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound, which can run up to 45 miles per hour. 
This is about the same speed as a human running down a track. Greyhounds are very fast because they have long legs, a streamlined body, and a strong tail. They can run this fast for short distances, but they can also run for long distances, like a marathon. ', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Beagle is a scent hound, meaning it is bred to hunt by following a trail of scents.', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 106, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': ''}, run=[RunInfo(run_id=UUID('52cb421d-b63f-4c5f-9b04-d4770c664725')), RunInfo(run_id=UUID('df2ea606-1622-4ed7-8d5d-8f6e068b71c4'))]) Streaming the Model output​ You can stream the model output. for chunk in watsonx_llm.stream( "Describe your favorite breed of dog and why it is your favorite." ): print(chunk, end="") My favorite breed of dog is a Labrador Retriever. Labradors are my favorite because they are extremely smart, very friendly, and love to be with people. They are also very playful and love to run around and have a lot of energy.
https://python.langchain.com/docs/integrations/llms/promptlayer_openai/
## PromptLayer OpenAI

`PromptLayer` is the first platform that allows you to track, manage, and share your GPT prompt engineering. `PromptLayer` acts as middleware between your code and `OpenAI`’s Python library.

`PromptLayer` records all your `OpenAI API` requests, allowing you to search and explore request history in the `PromptLayer` dashboard.

This example showcases how to connect to [PromptLayer](https://www.promptlayer.com/) to start recording your OpenAI requests.

Another example is [here](https://python.langchain.com/docs/integrations/providers/promptlayer/).

## Install PromptLayer

The `promptlayer` package is required to use PromptLayer with OpenAI. Install `promptlayer` using pip.

```
%pip install --upgrade --quiet promptlayer
```

## Imports

```
import os

import promptlayer
from langchain_community.llms import PromptLayerOpenAI
```

## Set the Environment API Key

You can create a PromptLayer API Key at [www.promptlayer.com](https://www.promptlayer.com/) by clicking the settings cog in the navbar.

Set it as an environment variable called `PROMPTLAYER_API_KEY`.

You also need an OpenAI Key, called `OPENAI_API_KEY`.

```
from getpass import getpass

PROMPTLAYER_API_KEY = getpass()
```

```
os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY
```

```
from getpass import getpass

OPENAI_API_KEY = getpass()
```

```
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```

## Use the PromptLayerOpenAI LLM like normal

_You can optionally pass in `pl_tags` to track your requests with PromptLayer’s tagging feature._

```
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
```

**The above request should now appear on your [PromptLayer dashboard](https://www.promptlayer.com/).**

## Using PromptLayer Track

If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantiating the PromptLayer LLM to get the request id.

```
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])

for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
```

Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
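A combined sketch (not from the original page) that uses tagging and request tracking together. It assumes `PROMPTLAYER_API_KEY` and `OPENAI_API_KEY` are set as described above; the tag names and the score value are arbitrary examples.

```
import promptlayer
from langchain_community.llms import PromptLayerOpenAI

# Tag the request and ask PromptLayer to return the request id alongside the generation
llm = PromptLayerOpenAI(pl_tags=["langchain", "example"], return_pl_id=True)
llm_results = llm.generate(["Tell me a short joke"])

for res in llm_results.generations:
    # each generation carries the PromptLayer request id when return_pl_id=True
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=90)
```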
null
{ "depth": 1, "httpStatusCode": 200, "loadedTime": "2024-04-22T02:40:17.229Z", "loadedUrl": "https://python.langchain.com/docs/integrations/llms/promptlayer_openai/", "referrerUrl": "https://python.langchain.com/sitemap.xml" }
{ "author": null, "canonicalUrl": "https://python.langchain.com/docs/integrations/llms/promptlayer_openai/", "description": "PromptLayer is the first platform that allows you to track, manage,", "headers": { ":status": 200, "accept-ranges": null, "access-control-allow-origin": "*", "age": "3500", "cache-control": "public, max-age=0, must-revalidate", "content-disposition": "inline; filename=\"promptlayer_openai\"", "content-length": null, "content-type": "text/html; charset=utf-8", "date": "Mon, 22 Apr 2024 02:40:16 GMT", "etag": "W/\"2d925988aca7ed732a3937f4938529ee\"", "server": "Vercel", "strict-transport-security": "max-age=63072000", "x-vercel-cache": "HIT", "x-vercel-id": "cle1::g2tfq-1713753616745-c96bc0df0f39" }, "jsonLd": null, "keywords": null, "languageCode": "en", "openGraph": [ { "content": "https://python.langchain.com/img/brand/theme-image.png", "property": "og:image" }, { "content": "https://python.langchain.com/docs/integrations/llms/promptlayer_openai/", "property": "og:url" }, { "content": "PromptLayer OpenAI | 🦜️🔗 LangChain", "property": "og:title" }, { "content": "PromptLayer is the first platform that allows you to track, manage,", "property": "og:description" } ], "title": "PromptLayer OpenAI | 🦜️🔗 LangChain" }
PromptLayer OpenAI PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts a middleware between your code and OpenAI’s python library. PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard. This example showcases how to connect to PromptLayer to start recording your OpenAI requests. Another example is here. Install PromptLayer​ The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip. %pip install --upgrade --quiet promptlayer Imports​ import os import promptlayer from langchain_community.llms import PromptLayerOpenAI Set the Environment API Key​ You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY. You also need an OpenAI Key, called OPENAI_API_KEY. from getpass import getpass PROMPTLAYER_API_KEY = getpass() os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY from getpass import getpass OPENAI_API_KEY = getpass() os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY Use the PromptLayerOpenAI LLM like normal​ You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature. llm = PromptLayerOpenAI(pl_tags=["langchain"]) llm("I am a cat and I want") The above request should now appear on your PromptLayer dashboard. Using PromptLayer Track​ If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. llm = PromptLayerOpenAI(return_pl_id=True) llm_results = llm.generate(["Tell me a joke"]) for res in llm_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.