id | text | source
---|---|---|
a4cb4780b09c-2 | Redis Readme fileInstalling​pip install redisWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")Example​from langchain.embeddings import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores.redis import Redisfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()If you're not interested in the keys of your entries, you can create your Redis instance directly from the documents.rds = Redis.from_documents( docs, embeddings, redis_url="redis://localhost:6379", index_name="link")If you're interested in the keys of your entries, you have to split your docs into texts and metadatastexts = [d.page_content for d in docs]metadatas = [d.metadata for d in docs]rds, keys = Redis.from_texts_return_keys( texts, embeddings, redis_url="redis://localhost:6379", index_name="link")rds.index_namequery = "What did the president say about Ketanji Brown Jackson"results = rds.similarity_search(query)print(results[0].page_content)print(rds.add_texts(["Ankush went to Princeton"]))query = "Princeton"results = rds.similarity_search(query)print(results[0].page_content)# Load from existing indexrds = Redis.from_existing_index( embeddings, redis_url="redis://localhost:6379", index_name="link")query = "What did the president say about Ketanji Brown Jackson"results | https://python.langchain.com/docs/integrations/vectorstores/redis |
a4cb4780b09c-3 | = rds.similarity_search(query)print(results[0].page_content)Redis as Retriever​Here we go over different options for using the vector store as a retriever.There are three different search methods we can use to do retrieval. By default, it will use semantic similarity.retriever = rds.as_retriever()docs = retriever.get_relevant_documents(query)We can also use similarity_limit as a search method. This only returns documents if they are similar enough.retriever = rds.as_retriever(search_type="similarity_limit")# Here we can see it doesn't return any results because there are no relevant documentsretriever.get_relevant_documents("where did ankush go to college?")Delete keysTo delete your entries you have to address them by their keys.Redis.delete(keys, redis_url="redis://localhost:6379")Redis connection URL examples​Valid Redis URL schemes are:redis:// - Connection to Redis standalone, unencryptedrediss:// - Connection to Redis standalone, with TLS encryptionredis+sentinel:// - Connection to Redis server via Redis Sentinel, unencryptedrediss+sentinel:// - Connection to Redis server via Redis Sentinel, both connections with TLS encryptionMore information about additional connection parameters can be found in the redis-py documentation at https://redis-py.readthedocs.io/en/stable/connections.html# connection to redis standalone at localhost, db 0, no passwordredis_url = "redis://localhost:6379"# connection to host "redis" port 7379 with db 2 and password "secret" (old style authentication scheme without username / pre 6.x)redis_url = "redis://:secret@redis:7379/2"# connection to host redis on default port with user "joe", pass "secret" using redis version 6+ | https://python.langchain.com/docs/integrations/vectorstores/redis |
a4cb4780b09c-4 | ACLsredis_url = "redis://joe:secret@redis/0"# connection to sentinel at localhost with default group mymaster and db 0, no passwordredis_url = "redis+sentinel://localhost:26379"# connection to sentinel at host redis with default port 26379 and user "joe" with password "secret" with default group mymaster and db 0redis_url = "redis+sentinel://joe:secret@redis"# connection to sentinel, no auth with sentinel monitoring group "zone-1" and database 2redis_url = "redis+sentinel://redis:26379/zone-1/2"# connection to redis standalone at localhost, db 0, no password but with TLS supportredis_url = "rediss://localhost:6379"# connection to redis sentinel at localhost and default port, db 0, no password# but with TLS support for both Sentinel and Redis serverredis_url = "rediss+sentinel://localhost"PreviousQdrantNextRocksetInstallingExampleRedis as RetrieverRedis connection URL examplesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. | https://python.langchain.com/docs/integrations/vectorstores/redis |
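The connection URL formats above can be sanity-checked without a running server. A minimal sketch using only Python's standard library — the helper name and the example URLs are ours, for illustration only:

```python
from urllib.parse import urlparse

def describe_redis_url(redis_url: str) -> dict:
    """Split a Redis connection URL into scheme, credentials, host, port, and db."""
    parts = urlparse(redis_url)
    return {
        "scheme": parts.scheme,      # redis / rediss / redis+sentinel / rediss+sentinel
        "username": parts.username,  # empty string for the pre-6.x ":password@" style
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,          # None when the URL relies on the default port
        "db": parts.path.lstrip("/") or "0",  # db defaults to 0 when the path is empty
    }

print(describe_redis_url("redis://joe:secret@redis/0"))
print(describe_redis_url("redis+sentinel://localhost:26379"))
```

For actually opening a connection, redis-py's documented entry point for these URLs is `redis.from_url(...)`, which accepts the same schemes.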
351c32d9c132-0 | DocArrayHnswSearch | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw |
351c32d9c132-1 | Skip to main content🦜️🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesAlibaba Cloud OpenSearchAnalyticDBAnnoyAtlasAwaDBAzure Cognitive SearchCassandraChromaClarifaiClickHouse Vector SearchActiveloop's Deep LakeDocArrayHnswSearchDocArrayInMemorySearchElasticSearchFAISSHologresLanceDBMarqoMatchingEngineMilvusMongoDB AtlasMyScaleOpenSearchpg_embeddingPGVectorPineconeQdrantRedisRocksetSingleStoreDBscikit-learnStarRocksSupabase (Postgres)TairTigrisTypesenseVectaraWeaviateZillizGrouped by providerIntegrationsVector storesDocArrayHnswSearchOn this pageDocArrayHnswSearchDocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.This notebook shows how to use functionality related to the DocArrayHnswSearch.Setup​Uncomment the below cells to install docarray and get/set your OpenAI api key if you haven't already done so.# !pip install "docarray[hnswlib]"# Get an OpenAI token: https://platform.openai.com/account/api-keys# import os# from getpass import getpass# OPENAI_API_KEY = getpass()# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEYUsing DocArrayHnswSearch​from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import | https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw |
351c32d9c132-2 | DocArrayHnswSearchfrom langchain.document_loaders import TextLoaderdocuments = TextLoader("../../../state_of_the_union.txt").load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()db = DocArrayHnswSearch.from_documents( docs, embeddings, work_dir="hnswlib_store/", n_dim=1536)Similarity search​query = "What did the president say about Ketanji Brown Jackson"docs = db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.Similarity search with score​The returned distance score is cosine distance. Therefore, a lower score is better.docs = db.similarity_search_with_score(query)docs[0] | https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw |
351c32d9c132-3 | (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}), 0.36962226)import shutil# delete the dirshutil.rmtree("hnswlib_store") | https://python.langchain.com/docs/integrations/vectorstores/docarray_hnsw |
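The score returned by similarity_search_with_score here is a cosine distance, so lower means closer. For intuition, the metric can be reproduced from raw vectors in plain Python — an illustrative sketch, not part of the DocArray API:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity; 0.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# A vector compared with itself is at distance 0; orthogonal vectors are at 1.
print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

The 0.3696… in the output above is this quantity computed between the query embedding and the stored chunk embedding.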
b5d6ec64a74d-0 | Hologres | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/vectorstores/hologres |
b5d6ec64a74d-1 | Hologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.
Hologres supports standard SQL syntax, is compatible with PostgreSQL, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. Hologres provides vector database functionality by adopting Proxima. | https://python.langchain.com/docs/integrations/vectorstores/hologres |
b5d6ec64a74d-2 | Proxima is a high-performance software library developed by Alibaba DAMO Academy. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.This notebook shows how to use functionality related to the Hologres Proxima vector database. | https://python.langchain.com/docs/integrations/vectorstores/hologres |
b5d6ec64a74d-3 | Click here to fast deploy a Hologres cloud instance.#!pip install psycopg2from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import HologresSplit documents and get embeddings by calling the OpenAI APIfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()Connect to Hologres by setting the related environment variables (note that the names match the os.environ.get calls below).export PGHOST={host}export PGPORT={port} # Optional, default is 80export PGDATABASE={db_name} # Optional, default is postgresexport PGUSER={username}export PGPASSWORD={password}Then store your embeddings and documents into Hologresimport osconnection_string = Hologres.connection_string_from_db_params( host=os.environ.get("PGHOST", "localhost"), port=int(os.environ.get("PGPORT", "80")), database=os.environ.get("PGDATABASE", "postgres"), user=os.environ.get("PGUSER", "postgres"), password=os.environ.get("PGPASSWORD", "postgres"),)vector_db = Hologres.from_documents( docs, embeddings, connection_string=connection_string, table_name="langchain_example_embeddings",)Query and retrieve dataquery = "What did the president say about Ketanji Brown Jackson"docs = vector_db.similarity_search(query)print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding | https://python.langchain.com/docs/integrations/vectorstores/hologres |
b5d6ec64a74d-4 | our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. | https://python.langchain.com/docs/integrations/vectorstores/hologres |
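The connection_string_from_db_params helper in the Hologres example assembles a libpq-style connection string from those parameters. A rough standalone sketch of that idea, using the same defaults as the snippet — the function body is our own assumption; the real helper's exact output format may differ:

```python
import os

def connection_string_from_db_params(host, port, database, user, password):
    """Assemble a libpq-style "key=value" connection string (illustrative sketch)."""
    return f"dbname={database} user={user} password={password} host={host} port={port}"

# Same defaults as the Hologres example: PGHOST/PGPORT/etc. with fallbacks.
connection_string = connection_string_from_db_params(
    host=os.environ.get("PGHOST", "localhost"),
    port=int(os.environ.get("PGPORT", "80")),
    database=os.environ.get("PGDATABASE", "postgres"),
    user=os.environ.get("PGUSER", "postgres"),
    password=os.environ.get("PGPASSWORD", "postgres"),
)
print(connection_string)
```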
27cac980e0b8-0 | Marqo | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-1 | This notebook shows how to use functionality related to the Marqo vectorstore.Marqo is an open-source vector search engine. Marqo allows you to store and query multimodal data such as text and images. Marqo creates the vectors for you using a huge selection of open-source models; you can also provide your own fine-tuned models and Marqo will handle the loading and inference for you.To run this notebook with our docker image, please run the following commands first to get Marqo:docker pull marqoai/marqo:latestdocker rm -f marqodocker run --name marqo -it --privileged -p 8882:8882 --add-host host.docker.internal:host-gateway marqoai/marqo:latestpip install marqofrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Marqofrom langchain.document_loaders import TextLoaderfrom langchain.document_loaders import | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-2 | TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)import marqo# initialize marqomarqo_url = "http://localhost:8882" # if using marqo cloud replace with your endpoint (console.marqo.ai)marqo_api_key = "" # if using marqo cloud replace with your api key (console.marqo.ai)client = marqo.Client(url=marqo_url, api_key=marqo_api_key)index_name = "langchain-demo"docsearch = Marqo.from_documents(docs, index_name=index_name)query = "What did the president say about Ketanji Brown Jackson"result_docs = docsearch.similarity_search(query) Index langchain-demo exists.print(result_docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-3 | of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.result_docs = docsearch.similarity_search_with_score(query)print(result_docs[0][0].page_content, result_docs[0][1], sep="\n") Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. 0.68647254Additional features​One of the powerful features of Marqo as a vectorstore is that you can use indexes created externally. For example:If you had a database of image and text pairs from another application, you can simply use it in langchain with the Marqo vectorstore. Note that bringing your own multimodal indexes will disable the add_texts method.If you had a database of text documents, you can bring it into the langchain framework and add more texts through add_texts.The documents that are returned are customised by passing your | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-4 | own function to the page_content_builder callback in the search methods.Multimodal Example​# use a new indexindex_name = "langchain-multimodal-demo"# in case the demo is re-runtry: client.delete_index(index_name)except Exception: print(f"Creating {index_name}")# This index could have been created by another systemsettings = {"treat_urls_and_pointers_as_images": True, "model": "ViT-L/14"}client.create_index(index_name, **settings)client.index(index_name).add_documents( [ # image of a bus { "caption": "Bus", "image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpg", }, # image of a plane { "caption": "Plane", "image": "https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg", }, ],) {'errors': False, 'processingTimeMs': 2090.2822139996715, 'index_name': 'langchain-multimodal-demo', 'items': [{'_id': 'aa92fc1c-1fb2-4d86-b027-feb507c419f7', | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-5 | 'result': 'created', 'status': 201}, {'_id': '5142c258-ef9f-4bf2-a1a6-2307280173a0', 'result': 'created', 'status': 201}]}def get_content(res): """Helper to format Marqo's documents into text to be used as page_content""" return f"{res['caption']}: {res['image']}"docsearch = Marqo(client, index_name, page_content_builder=get_content)query = "vehicles that fly"doc_results = docsearch.similarity_search(query)for doc in doc_results: print(doc.page_content) Plane: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image2.jpg Bus: https://raw.githubusercontent.com/marqo-ai/marqo/mainline/examples/ImageSearchGuide/data/image4.jpgText only example​# use a new indexindex_name = "langchain-byo-index-demo"# in case the demo is re-runtry: client.delete_index(index_name)except Exception: print(f"Creating {index_name}")# This index could have been created by another systemclient.create_index(index_name)client.index(index_name).add_documents( [ { "Title": "Smartphone", "Description": "A smartphone is a portable computer device that combines mobile telephone " "functions and computing functions into one unit.", }, | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-6 | { "Title": "Telephone", "Description": "A telephone is a telecommunications device that permits two or more users to" "conduct a conversation when they are too far apart to be easily heard directly.", }, ],) {'errors': False, 'processingTimeMs': 139.2144540004665, 'index_name': 'langchain-byo-index-demo', 'items': [{'_id': '27c05a1c-b8a9-49a5-ae73-fbf1eb51dc3f', 'result': 'created', 'status': 201}, {'_id': '6889afe0-e600-43c1-aa3b-1d91bf6db274', 'result': 'created', 'status': 201}]}# Note text indexes retain the ability to use add_texts despite different field names in documents# this is because the page_content_builder callback lets you handle these document fields as requireddef get_content(res): """Helper to format Marqo's documents into text to be used as page_content""" if "text" in res: return res["text"] return res["Description"]docsearch = Marqo(client, index_name, page_content_builder=get_content)docsearch.add_texts(["This is a document that is about elephants"]) | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-7 | ['9986cc72-adcd-4080-9d74-265c173a9ec3']query = "modern communications devices"doc_results = docsearch.similarity_search(query)print(doc_results[0].page_content) A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.query = "elephants"doc_results = docsearch.similarity_search(query, page_content_builder=get_content)print(doc_results[0].page_content) This is a document that is about elephantsWeighted Queries​We also expose Marqo's weighted queries, which are a powerful way to compose complex semantic searches.query = {"communications devices": 1.0}doc_results = docsearch.similarity_search(query)print(doc_results[0].page_content) A smartphone is a portable computer device that combines mobile telephone functions and computing functions into one unit.query = {"communications devices": 1.0, "technology post 2000": -1.0}doc_results = docsearch.similarity_search(query)print(doc_results[0].page_content) A telephone is a telecommunications device that permits two or more users toconduct a conversation when they are too far apart to be easily heard directly.Question Answering with SourcesThis section shows how to use Marqo as part of a RetrievalQAWithSourcesChain. Marqo will perform the searches for information in the sources.from langchain.chains import RetrievalQAWithSourcesChainfrom langchain import OpenAIimport osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") OpenAI API Key:········with open("../../../state_of_the_union.txt") as f: state_of_the_union = | https://python.langchain.com/docs/integrations/vectorstores/marqo |
27cac980e0b8-8 | f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)index_name = "langchain-qa-with-retrieval"docsearch = Marqo.from_documents(docs, index_name=index_name) Index langchain-qa-with-retrieval exists.chain = RetrievalQAWithSourcesChain.from_chain_type( OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())chain( {"question": "What did the president say about Justice Breyer"}, return_only_outputs=True,) {'answer': ' The president honored Justice Breyer, thanking him for his service and noting that he is a retiring Justice of the United States Supreme Court.\n', 'sources': '../../../state_of_the_union.txt'} | https://python.langchain.com/docs/integrations/vectorstores/marqo |
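Marqo evaluates weighted queries server-side; the toy sketch below only illustrates how signed weights shift a ranking, matching the intuition of the "communications devices" example where a negative weight on "technology post 2000" demotes the smartphone. All per-term similarity numbers here are invented for illustration:

```python
def weighted_score(term_scores: dict, weights: dict) -> float:
    """Combine per-term similarity scores with signed weights, as in a weighted query."""
    return sum(weights[t] * term_scores.get(t, 0.0) for t in weights)

# Hypothetical per-term similarities for two documents (made-up values):
smartphone = {"communications devices": 0.9, "technology post 2000": 0.8}
telephone = {"communications devices": 0.8, "technology post 2000": 0.1}

w = {"communications devices": 1.0, "technology post 2000": -1.0}
print(weighted_score(smartphone, w))  # ≈ 0.1 — penalized for matching the negated term
print(weighted_score(telephone, w))   # ≈ 0.7 — now ranks first, as in the doc output
```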
19cbf1cf0411-0 | MyScale | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/vectorstores/myscale |
19cbf1cf0411-1 | MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse. This notebook shows how to use functionality related to the MyScale vector database.Setting up environments​pip install clickhouse-connectWe want to use OpenAIEmbeddings so we have to get the OpenAI API Key.import osimport getpassos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")There are two ways to set up parameters for the MyScale index.Environment VariablesBefore you run the app, please set the environment variable with export: | https://python.langchain.com/docs/integrations/vectorstores/myscale |
19cbf1cf0411-2 | export MYSCALE_HOST='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...You can easily find your account, password and other info on our SaaS. For details please refer to this documentEvery attribute under MyScaleSettings can be set with the prefix MYSCALE_ and is case-insensitive.Create MyScaleSettings object with parameters```pythonfrom langchain.vectorstores import MyScale, MyScaleSettingsconfig = MyScaleSettings(host="<your-backend-url>", port=8443, ...)index = MyScale(embedding_function, config)index.add_documents(...)```from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import MyScalefrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for d in docs: d.metadata = {"some": "metadata"}docsearch = MyScale.from_documents(docs, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query)print(docs[0].page_content)Get connection info and data schema​print(str(docsearch))Filtering​You have direct access to the MyScale SQL WHERE statement and can write a WHERE clause following standard SQL.NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.If you customized your column_map under your setting, you can search with a filter like this:from langchain.vectorstores import MyScale, MyScaleSettingsfrom langchain.document_loaders import TextLoaderloader = | https://python.langchain.com/docs/integrations/vectorstores/myscale |
19cbf1cf0411-3 | TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()for i, d in enumerate(docs): d.metadata = {"doc_id": i}docsearch = MyScale.from_documents(docs, embeddings)Similarity search with score​The returned distance score is cosine distance. Therefore, a lower score is better.meta = docsearch.metadata_columnoutput = docsearch.similarity_search_with_relevance_scores( "What did the president say about Ketanji Brown Jackson?", k=4, where_str=f"{meta}.doc_id<10",)for d, dist in output: print(dist, d.metadata, d.page_content[:20] + "...")Deleting your data​docsearch.drop() | https://python.langchain.com/docs/integrations/vectorstores/myscale |
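Because where_str is spliced into the SQL verbatim, it must never contain raw user input (hence the SQL-injection note in the MyScale section). One defensive pattern is to build the clause only from validated values — the helper below is our own sketch, reusing the doc_id metadata field from the example above:

```python
def doc_id_where(meta_column: str, max_doc_id) -> str:
    """Build a where_str for a numeric doc_id filter from a validated integer.

    Only an int is interpolated, so the resulting clause cannot smuggle in SQL;
    never splice raw user strings into where_str."""
    if not isinstance(max_doc_id, int) or isinstance(max_doc_id, bool):
        raise TypeError("max_doc_id must be an int")
    return f"{meta_column}.doc_id < {max_doc_id}"

print(doc_id_where("metadata", 10))  # metadata.doc_id < 10
```

The result can then be passed as where_str to similarity_search_with_relevance_scores, as in the snippet above.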
b9e55ba552b7-0 | Rockset | 🦜️🔗 Langchain | https://python.langchain.com/docs/integrations/vectorstores/rockset |
b9e55ba552b7-1 | Rockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters. This notebook demonstrates how to use Rockset as a vectorstore in langchain. To get started, make sure you have a Rockset account and an API key available.Setting up environment​Make sure you have a Rockset account and go to the web console to get the API key. Details can be found on the website. For the purpose of this notebook, we will assume you're using Rockset from Oregon (us-west-2).Now you will need to create a Rockset collection to write to; use the Rockset web console to do this. For the purpose of this exercise, we will create a collection called langchain_demo. Since Rockset supports schemaless | https://python.langchain.com/docs/integrations/vectorstores/rockset |
b9e55ba552b7-2 | ingest, you don't need to inform Rockset of the shape of metadata for your texts. However, you do need to decide on two columns upfront:Where to store the text. We will use the column description for this.Where to store the vector-embedding for the text. We will use the column description_embedding for this.Also you will need to inform Rockset that description_embedding is a vector-embedding, so that its format can be optimized. You can do this using a Rockset ingest transformation while creating your collection:SELECT | https://python.langchain.com/docs/integrations/vectorstores/rockset |
b9e55ba552b7-3 | _input.* EXCEPT(_meta),
VECTOR_ENFORCE(_input.description_embedding, #length_of_vector_embedding, 'float') as description_embedding
FROM | https://python.langchain.com/docs/integrations/vectorstores/rockset |
b9e55ba552b7-4 | _input// We used OpenAI text-embedding-ada-002 for this examples, where #length_of_vector_embedding = 1536Now let's install the rockset-python-client. This is used by langchain to talk to the Rockset database.pip install rocksetThis is it! Now you're ready to start writing some python code to store vector embeddings in Rockset, and querying the database to find texts similar to your query! We support 3 distance functions: COSINE_SIM, EUCLIDEAN_DIST and DOT_PRODUCT.Example​import osimport rockset# Make sure env variable ROCKSET_API_KEY is setROCKSET_API_KEY = os.environ.get("ROCKSET_API_KEY")ROCKSET_API_SERVER = ( rockset.Regions.usw2a1) # Make sure this points to the correct Rockset regionrockset_client = rockset.RocksetClient(ROCKSET_API_SERVER, ROCKSET_API_KEY)COLLECTION_NAME = "langchain_demo"TEXT_KEY = "description"EMBEDDING_KEY = "description_embedding"Now let's use this client to create a Rockset Langchain Vectorstore!1. Inserting texts​from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores.rocksetdb import RocksetDBloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)docs = text_splitter.split_documents(documents)Now we have the documents we want to insert. Let's create a Rockset vectorstore and insert these docs into the Rockset collection. We will use OpenAIEmbeddings to create embeddings for the texts, but you're free to use whatever you want.# Make sure the environment variable | https://python.langchain.com/docs/integrations/vectorstores/rockset |
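As a rough illustration of what the splitting step above produces, here is a minimal pure-Python sketch of a greedy character splitter. This is a simplification for intuition only — LangChain's `CharacterTextSplitter` handles separators, chunk overlap, and length functions more carefully:

```python
def simple_split(text: str, chunk_size: int = 1000, separator: str = "\n\n") -> list:
    """Greedy sketch of character splitting: break the text on a separator,
    then merge consecutive pieces back together while they fit in chunk_size.
    A single piece longer than chunk_size is kept whole rather than cut."""
    pieces = [p for p in text.split(separator) if p]
    chunks, current = [], ""
    for piece in pieces:
        candidate = (current + separator + piece) if current else piece
        if len(candidate) <= chunk_size or not current:
            current = candidate
        else:
            chunks.append(current)
            current = piece
    if current:
        chunks.append(current)
    return chunks


chunks = simple_split("para one\n\npara two\n\npara three", chunk_size=18)
print(chunks)  # ['para one\n\npara two', 'para three']
```

With `chunk_size=1000` over the State of the Union text, this kind of splitter yields roughly paragraph-sized chunks, which is what gets embedded and stored below.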
Now we have the documents we want to insert. Let's create a Rockset vector store and insert these docs into the Rockset collection. We will use `OpenAIEmbeddings` to create embeddings for the texts, but you're free to use whatever you want.

```python
# Make sure the environment variable OPENAI_API_KEY is set up
embeddings = OpenAIEmbeddings()

docsearch = RocksetDB(
    client=rockset_client,
    embeddings=embeddings,
    collection_name=COLLECTION_NAME,
    text_key=TEXT_KEY,
    embedding_key=EMBEDDING_KEY,
)

ids = docsearch.add_texts(
    texts=[d.page_content for d in docs],
    metadatas=[d.metadata for d in docs],
)
```

If you go to the Rockset console now, you should be able to see these docs along with the metadata field `source`.

### 2. Searching similar texts

Now let's search Rockset to find strings similar to our query string!

```python
query = "What did the president say about Ketanji Brown Jackson"
output = docsearch.similarity_search_with_relevance_scores(
    query, 4, RocksetDB.DistanceFunction.COSINE_SIM
)
print("output length:", len(output))
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + "...")

# output length: 4
# 0.764990692109871 {'source': '../../../state_of_the_union.txt'} Madam Speaker, Madam...
# 0.7485416901622112 {'source': '../../../state_of_the_union.txt'} And I'm taking robus...
# 0.7468678973398306 {'source': '../../../state_of_the_union.txt'} And so many families...
# 0.7436231261419488 {'source': '../../../state_of_the_union.txt'} Groups of citizens b...
```
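For intuition about the three supported distance functions, here is a plain-Python sketch of the underlying math. Rockset computes these server-side over the stored embeddings; this is only an illustration:

```python
import math


def dot_product(a, b):
    # DOT_PRODUCT: sum of elementwise products; higher means more similar
    return sum(x * y for x, y in zip(a, b))


def euclidean_dist(a, b):
    # EUCLIDEAN_DIST: straight-line distance; lower means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def cosine_sim(a, b):
    # COSINE_SIM: dot product of the normalized vectors; higher means more similar
    return dot_product(a, b) / (
        math.sqrt(dot_product(a, a)) * math.sqrt(dot_product(b, b))
    )


a, b = [1.0, 0.0], [1.0, 1.0]
print(round(cosine_sim(a, b), 4))      # 0.7071
print(round(euclidean_dist(a, b), 4))  # 1.0
print(dot_product(a, b))               # 1.0
```

Note the direction of each score: for `COSINE_SIM` and `DOT_PRODUCT` a higher value means more similar (as in the ~0.76 scores above), while for `EUCLIDEAN_DIST` a lower value means more similar.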
You can also use a `where` filter to prune your search space. You can add filters on the text key or on any of the metadata fields. Note: since Rockset stores each metadata field as a separate column internally, these filters are much faster than in other vector databases, which store all metadata as a single JSON.

For example, to find all texts NOT containing the substring "citizens", you can use the following code:

```python
output = docsearch.similarity_search_with_relevance_scores(
    query,
    4,
    RocksetDB.DistanceFunction.COSINE_SIM,
    where_str="{} NOT LIKE '%citizens%'".format(TEXT_KEY),
)
print("output length:", len(output))
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + "...")

# output length: 4
# 0.7651359650263554 {'source': '../../../state_of_the_union.txt'} Madam Speaker, Madam...
# 0.7486265516824893 {'source': '../../../state_of_the_union.txt'} And I'm taking robus...
# 0.7469625542348115 {'source': '../../../state_of_the_union.txt'} And so many families...
# 0.7344177777547739 {'source': '../../../state_of_the_union.txt'} We see the unity amo...
```

### 3. [Optional] Drop all inserted documents

To delete texts from the Rockset collection, you need to know the unique ID associated with each document inside Rockset. These ids can either be supplied directly by the user while inserting the texts (in the `RocksetDB.add_texts()` function); otherwise, Rockset will generate a unique ID for each document. Either way, `RocksetDB.add_texts()` returns the ids of the inserted documents.

To delete these docs, simply use the `RocksetDB.delete_texts()` function:

```python
docsearch.delete_texts(ids)
```
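As an aside on the `where_str` used in the search section above: it is ordinary Rockset SQL, and the `%` wildcard in `LIKE '%citizens%'` makes the filter a substring test. A minimal sketch of that pattern semantics in Python (an illustration of SQL `LIKE` behavior, not Rockset's implementation — only the `%` wildcard is handled here):

```python
import re


def sql_like(value: str, pattern: str) -> bool:
    """Sketch of SQL LIKE: '%' matches any run of characters.
    Literal parts of the pattern are escaped so regex metacharacters
    in the pattern are treated as plain text."""
    regex = ".*".join(re.escape(part) for part in pattern.split("%"))
    return re.fullmatch(regex, value, flags=re.DOTALL) is not None


print(sql_like("Groups of citizens blocked tanks", "%citizens%"))      # True
print(sql_like("Madam Speaker, Madam Vice President", "%citizens%"))   # False
```

This is why the `NOT LIKE '%citizens%'` filter above drops the "Groups of citizens b..." chunk from the results while the other three chunks survive.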
## Congratulations!

Voila! In this example you successfully created a Rockset collection, inserted documents along with their OpenAI vector embeddings, and searched for similar docs both with and without metadata filters.

Keep an eye on https://rockset.com/blog/introducing-vector-search-on-rockset/ for future updates in this space!
# Cassandra
Apache Cassandra® is a NoSQL, row-oriented, highly scalable and highly available database.

The newest Cassandra releases natively support vector similarity search.

To run this notebook you need either a running Cassandra cluster equipped with Vector Search capabilities (in pre-release at the time of writing) or a DataStax Astra DB instance running in the cloud (you can get one for free at datastax.com). Check cassio.org for more information.

```bash
pip install "cassio>=0.0.7"
```

## Please provide database connection parameters and secrets

```python
import os
import getpass

database_mode = (input("\n(C)assandra or (A)stra DB? ")).upper()

keyspace_name = input("\nKeyspace name? ")

if database_mode == "A":
    ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ')
    # The Secure Connect Bundle path is required below when connecting to Astra DB
    ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ")
elif database_mode == "C":
    CASSANDRA_CONTACT_POINTS = input(
        "Contact points? (comma-separated, empty for localhost) "
    ).strip()
```

Depending on whether you are using a local Cassandra cluster or a cloud-based Astra DB instance, create the corresponding database connection `Session` object:

```python
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

if database_mode == "C":
    if CASSANDRA_CONTACT_POINTS:
        cluster = Cluster(
            [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(",") if cp.strip()]
        )
    else:
        cluster = Cluster()
    session = cluster.connect()
elif database_mode == "A":
    ASTRA_DB_CLIENT_ID = "token"
    cluster = Cluster(
        cloud={
            "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH,
        },
        auth_provider=PlainTextAuthProvider(
            ASTRA_DB_CLIENT_ID,
            ASTRA_DB_APPLICATION_TOKEN,
        ),
    )
    session = cluster.connect()
else:
    raise NotImplementedError
```

## Please provide OpenAI access key

We want to use `OpenAIEmbeddings`, so we have to get the OpenAI API Key.

```python
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
## Creation and usage of the Vector Store

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Cassandra
from langchain.document_loaders import TextLoader

loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embedding_function = OpenAIEmbeddings()

table_name = "my_vector_db_table"

docsearch = Cassandra.from_documents(
    documents=docs,
    embedding=embedding_function,
    session=session,
    keyspace=keyspace_name,
    table_name=table_name,
)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

# if you already have an index, you can load it and use it like this:
# docsearch_preexisting = Cassandra(
#     embedding=embedding_function,
#     session=session,
#     keyspace=keyspace_name,
#     table_name=table_name,
# )
# docsearch_preexisting.similarity_search(query, k=2)

print(docs[0].page_content)
```

## Maximal Marginal Relevance Searches

In addition to using similarity search in the retriever object, you can also use `mmr` as the retriever's search type:

```python
retriever = docsearch.as_retriever(search_type="mmr")
matched_docs = retriever.get_relevant_documents(query)
for i, d in enumerate(matched_docs):
    print(f"\n## Document {i}\n")
    print(d.page_content)
```

Or use `max_marginal_relevance_search` directly:

```python
found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)
for i, doc in enumerate(found_docs):
    print(f"{i + 1}.", doc.page_content, "\n")
```
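Conceptually, `max_marginal_relevance_search` fetches `fetch_k` candidates by similarity and then greedily selects `k` of them, trading off relevance to the query against redundancy with the results already picked. The selection step can be sketched in plain Python as follows (a simplified version with a fixed trade-off factor `lam`; LangChain's actual implementation differs in its details):

```python
def mmr_select(query_sim, cand_sims, k, lam=0.5):
    """query_sim[i]: similarity of candidate i to the query.
    cand_sims[i][j]: similarity between candidates i and j.
    Greedily pick k indices, penalizing candidates that are
    too similar to ones already selected."""
    selected = []
    remaining = list(range(len(query_sim)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((cand_sims[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected


# Candidates 0 and 1 are near-duplicates; after picking 0, MMR skips 1
# in favor of the more diverse (though less relevant) candidate 2.
query_sim = [0.9, 0.85, 0.7]
cand_sims = [
    [1.0, 0.95, 0.1],
    [0.95, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
print(mmr_select(query_sim, cand_sims, k=2))  # [0, 2]
```

This is why `mmr` retrieval tends to return a more varied set of passages than plain similarity search when the top hits overlap heavily.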