--- logos: - /img/customers-logo/discord.svg - /img/customers-logo/johnson-and-johnson.svg - /img/customers-logo/perplexity.svg - /img/customers-logo/mozilla.svg - /img/customers-logo/voiceflow.svg - /img/customers-logo/bosch-digital.svg sitemapExclude: true ---
customers/logo-cards-1.md
--- review: “We looked at all the big options out there right now for vector databases, with our focus on ease of use, performance, pricing, and communication. <strong>Qdrant came out on top in each category...</strong> ultimately, it wasn't much of a contest.” names: Alex Webb positions: Director of Engineering, CB Insights avatar: src: /img/customers/alex-webb.svg alt: Alex Webb Avatar logo: src: /img/brands/cb-insights.svg alt: Logo sitemapExclude: true ---
customers/customers-testimonial1.md
--- title: Customers description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing. caseStudy: logo: src: /img/customers-case-studies/customer-logo.svg alt: Logo title: Recommendation Engine with Qdrant Vector Database description: Dailymotion leverages Qdrant to optimize its <b>video recommendation engine</b>, managing over 420 million videos and processing 13 million recommendations daily. With this, Dailymotion was able to <b>reduce content processing times from hours to minutes</b> and <b>increase user interactions and click-through rates by more than 3x.</b> link: text: Read Case Study url: /blog/case-study-dailymotion/ image: src: /img/customers-case-studies/case-study.png alt: Preview cases: - id: 0 logo: src: /img/customers-case-studies/visua.svg alt: Visua Logo image: src: /img/customers-case-studies/case-visua.png alt: The hands of a person in a medical gown holding a tablet against the background of a pharmacy shop title: VISUA improves quality control process for computer vision with anomaly detection by 10x. link: text: Read Story url: /blog/case-study-visua/ - id: 1 logo: src: /img/customers-case-studies/dust.svg alt: Dust Logo image: src: /img/customers-case-studies/case-dust.png alt: A man in a jeans shirt is holding a smartphone, only his hands are visible. In the foreground, there is an image of a robot surrounded by chat and sound waves. title: Dust uses Qdrant for RAG, achieving millisecond retrieval, reducing costs by 50%, and boosting scalability. link: text: Read Story url: /blog/dust-and-qdrant/ - id: 2 logo: src: /img/customers-case-studies/iris-agent.svg alt: Logo image: src: /img/customers-case-studies/case-iris-agent.png alt: Hands holding a smartphone, styled smartphone interface visualisation in the foreground. First-person view title: IrisAgent uses Qdrant for RAG to automate support and improve resolution times, transforming customer service. link: text: Read Story url: /blog/iris-agent-qdrant/ sitemapExclude: true ---
customers/customers-case-studies.md
--- review: “We LOVE Qdrant! The exceptional engineering, strong business value, and outstanding team behind the product drove our choice. Thank you for your great contribution to the technology community!” names: Kyle Tobin positions: Principal, Cognizant avatar: src: /img/customers/kyle-tobin.png alt: Kyle Tobin Avatar logo: src: /img/brands/cognizant.svg alt: Cognizant Logo sitemapExclude: true ---
customers/customers-testimonial2.md
--- logos: - /img/customers-logo/gitbook.svg - /img/customers-logo/deloitte.svg - /img/customers-logo/disney.svg sitemapExclude: true ---
customers/logo-cards-3.md
--- title: Vector Space Wall link: url: https://testimonial.to/qdrant/all text: Submit Your Testimonial testimonials: - id: 0 name: Jonathan Eisenzopf position: Chief Strategy and Research Officer at Talkmap avatar: src: /img/customers/jonathan-eisenzopf.svg alt: Avatar text: “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform on enterprise scale.” - id: 1 name: Angel Luis Almaraz Sánchez position: Full Stack | DevOps avatar: src: /img/customers/angel-luis-almaraz-sanchez.svg alt: Avatar text: Thank you, great work, Qdrant is my favorite option for similarity search. - id: 2 name: Shubham Krishna position: ML Engineer @ ML6 avatar: src: /img/customers/shubham-krishna.svg alt: Avatar text: Go ahead and check out Qdrant. I plan to build a movie retrieval search where you can ask anything regarding a movie based on the vector embeddings generated by an LLM. It can also be used for getting recommendations. - id: 3 name: Kwok Hing LEON position: Data Science avatar: src: /img/customers/kwok-hing-leon.svg alt: Avatar text: Check out Qdrant for improving searches. Bye to non-semantic KM engines. - id: 4 name: Ankur S position: Building avatar: src: /img/customers/ankur-s.svg alt: Avatar text: Qdrant is a great vector database. There is a real sense of thought behind the API! - id: 5 name: Yasin Salimibeni position: AI Evangelist | Generative AI Product Designer | Entrepreneur | Mentor avatar: src: /img/customers/yasin-salimibeni-view-yasin-salimibeni.svg alt: Avatar text: Great work. I just started testing Qdrant Azure and I was impressed by the efficiency and speed. Being deploy-ready on large cloud providers is a great plus. Way to go! - id: 6 name: Marcel Coetzee position: Data and AI Plumber avatar: src: /img/customers/marcel-coetzee.svg alt: Avatar text: Using Qdrant as a blazing fast vector store for a stealth project of mine. It offers fantastic functionality for semantic search &#10024; - id: 7 name: Andrew Rove position: Principal Software Engineer avatar: src: /img/customers/andrew-rove.svg alt: Avatar text: We have been using Qdrant in production now for over 6 months to store vectors for cosine similarity search and it is way more stable and faster than our old ElasticSearch vector index.<br/><br/>No merging segments, no red indexes at random times. It just works and was super easy to deploy via docker to our cluster.<br/><br/>It’s faster, cheaper to host, and more stable, and open source to boot! - id: 8 name: Josh Lloyd position: ML Engineer avatar: src: /img/customers/josh-lloyd.svg alt: Avatar text: I'm using Qdrant to search through thousands of documents to find similar text phrases for question answering. Qdrant's awesome filtering allows me to slice along metadata while I'm at it! &#128640; and it's fast &#9193;&#128293; - id: 9 name: Leonard Püttmann position: data scientist avatar: src: /img/customers/leonard-puttmann.svg alt: Avatar text: Amidst the hype around vector databases, Qdrant is by far my favorite one. It's super fast (written in Rust) and open-source! At Kern AI we use Qdrant for fast document retrieval and to do quick similarity search for text data. - id: 10 name: Stanislas Polu position: Software Engineer & Co-Founder, Dust avatar: src: /img/customers/stanislas-polu.svg alt: Avatar text: Qdrant's the best. By. Far.
- id: 11 name: Sivesh Sukumar position: Investor at Balderton avatar: src: /img/customers/sivesh-sukumar.svg alt: Avatar text: We're using Qdrant to help segment and source Europe's next wave of extraordinary companies! - id: 12 name: Saksham Gupta position: AI Governance Machine Learning Engineer avatar: src: /img/customers/saksham-gupta.svg alt: Avatar text: Looking forward to using Qdrant vector similarity search in the clinical trial space! OpenAI Embeddings + Qdrant = Match made in heaven! - id: 13 name: Rishav Dash position: Data Scientist avatar: src: /img/customers/rishav-dash.svg alt: Avatar text: awesome stuff &#128293; sitemapExclude: true ---
customers/customers-vector-space-wall.md
--- title: Customers description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing. sitemapExclude: true ---
customers/customers-hero.md
--- title: Customers description: Customers build: render: always cascade: - build: list: local publishResources: false render: never ---
customers/_index.md
--- logos: - /img/customers-logo/flipkart.svg - /img/customers-logo/x.svg - /img/customers-logo/quora.svg sitemapExclude: true ---
customers/logo-cards-2.md
--- title: Qdrant Demos and Tutorials description: Experience firsthand how Qdrant powers intelligent search, anomaly detection, and personalized recommendations, showcasing the full capabilities of vector search to revolutionize data exploration and insights. cards: - id: 0 title: Semantic Search Demo - Startup Search paragraphs: - id: 0 content: This demo leverages a pre-trained SentenceTransformer model to perform semantic searches on startup descriptions, transforming them into vectors for the Qdrant engine. - id: 1 content: Enter a query to see how neural search compares to traditional full-text search, with the option to toggle neural search on and off for direct comparison. link: text: View Demo url: https://qdrant.to/semantic-search-demo - id: 1 title: Semantic Search and Recommendations Demo - Food Discovery paragraphs: - id: 0 content: Explore personalized meal recommendations with our demo, using Delivery Service data. Like or dislike dish photos to refine suggestions based on visual appeal. - id: 1 content: Filter options allow for restaurant selections within your delivery area, tailoring your dining experience to your preferences. link: text: View Demo url: https://food-discovery.qdrant.tech/ - id: 2 title: Categorization Demo -<br> E-Commerce Products paragraphs: - id: 0 content: Discover the power of vector databases in e-commerce through our demo. Simply input a product name and watch as our multi-language model intelligently categorizes it. The dots you see represent product clusters, highlighting our system's efficient categorization. link: text: View Demo url: https://qdrant.to/extreme-classification-demo - id: 3 title: Code Search Demo -<br> Explore Qdrant's Codebase paragraphs: - id: 0 content: Semantic search isn't just for natural language. By combining results from two models, Qdrant is able to locate relevant code snippets down to the exact line. link: text: View Demo url: https://code-search.qdrant.tech/ ---
demo/_index.md
--- content: Learn more about all features that are supported on Qdrant Cloud. link: text: Qdrant Features url: /qdrant-vector-database/ sitemapExclude: true ---
qdrant-cloud/qdrant-cloud-features-link.md
--- title: Qdrant Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. startFree: text: Start Free url: https://cloud.qdrant.io/ contactUs: text: Contact us url: /contact-us/ icon: src: /icons/fill/lightning-purple.svg alt: Lightning content: "Learn how to get up and running in minutes:" #video: # src: / # button: Watch Demo # icon: # src: /icons/outline/play-white.svg # alt: Play # preview: /img/qdrant-cloud-demo.png sitemapExclude: true ---
qdrant-cloud/qdrant-cloud-hero.md
--- items: - id: 0 title: Run Anywhere description: Available on <b>AWS</b>, <b>Google Cloud</b>, and <b>Azure</b> regions globally for deployment flexibility and quick data access. image: src: /img/qdrant-cloud-bento-cards/run-anywhere-graphic.png alt: Run anywhere graphic - id: 1 title: Simple Setup and Start Free description: Deploying a cluster via the Qdrant Cloud Console takes only a few seconds and scales up as needed. image: src: /img/qdrant-cloud-bento-cards/simple-setup-illustration.png alt: Simple setup illustration - id: 2 title: Efficient Resource Management description: Dramatically reduce memory usage with built-in compression options and offload data to disk. image: src: /img/qdrant-cloud-bento-cards/efficient-resource-management.png alt: Efficient resource management diagram - id: 3 title: Zero-downtime Upgrades description: Uninterrupted service during scaling and model updates for continuous operation and deployment flexibility. link: text: Cluster Scaling url: /documentation/cloud/cluster-scaling/ image: src: /img/qdrant-cloud-bento-cards/zero-downtime-upgrades.png alt: Zero downtime upgrades illustration - id: 4 title: Continuous Backups description: Automated, configurable backups for data safety and easy restoration to previous states. link: text: Backups url: /documentation/cloud/backups/ image: src: /img/qdrant-cloud-bento-cards/continuous-backups.png alt: Continuous backups illustration sitemapExclude: true ---
qdrant-cloud/qdrant-cloud-bento-cards.md
--- title: "Qdrant Cloud: Scalable Managed Cloud Services" url: cloud description: "Discover Qdrant Cloud, the cutting-edge managed cloud for scalable, high-performance AI applications. Manage and deploy your vector data with ease today." build: render: always cascade: - build: list: local publishResources: false render: never ---
qdrant-cloud/_index.md
--- logo: title: Our Logo description: "The Qdrant logo represents a paramount expression of our core brand identity. With consistent placement, sizing, clear space, and color usage, our logo affirms its recognition across all platforms." logoCards: - id: 0 logo: src: /img/brand-resources-logos/logo.svg alt: Logo Full Color title: Logo Full Color link: url: /img/brand-resources-logos/logo.svg text: Download - id: 1 logo: src: /img/brand-resources-logos/logo-black.svg alt: Logo Black title: Logo Black link: url: /img/brand-resources-logos/logo-black.svg text: Download - id: 2 logo: src: /img/brand-resources-logos/logo-white.svg alt: Logo White title: Logo White link: url: /img/brand-resources-logos/logo-white.svg text: Download logomarkTitle: Logomark logomarkCards: - id: 0 logo: src: /img/brand-resources-logos/logomark.svg alt: Logomark Full Color title: Logomark Full Color link: url: /img/brand-resources-logos/logomark.svg text: Download - id: 1 logo: src: /img/brand-resources-logos/logomark-black.svg alt: Logomark Black title: Logomark Black link: url: /img/brand-resources-logos/logomark-black.svg text: Download - id: 2 logo: src: /img/brand-resources-logos/logomark-white.svg alt: Logomark White title: Logomark White link: url: /img/brand-resources-logos/logomark-white.svg text: Download colors: title: Colors description: Our brand colors play a crucial role in maintaining a cohesive visual identity. The careful balance of these colors ensures a consistent and impactful representation of Qdrant, reinforcing our commitment to excellence and precision in every aspect of our work. cards: - id: 0 name: Amaranth type: HEX code: "DC244C" - id: 1 name: Blue type: HEX code: "2F6FF0" - id: 2 name: Violet type: HEX code: "8547FF" - id: 3 name: Teal type: HEX code: "038585" - id: 4 name: Black type: HEX code: "090E1A" - id: 5 name: White type: HEX code: "FFFFFF" typography: title: Typography description: Our main typeface is Satoshi, which is employed for both UI and marketing purposes. Headlines are set in Bold (600), while body text is rendered in Medium (500). example: AaBb specimen: "ABCDEFGHIJKLMNOPQRSTUVWXYZ<br>abcdefghijklmnopqrstuvwxyz<br>0123456789 !@#$%^&*()" link: url: https://api.fontshare.com/v2/fonts/download/satoshi text: Download trademarks: title: Trademarks description: All assets associated with the Qdrant brand are safeguarded by relevant trademark, copyright, and intellectual property regulations. Utilization of the Qdrant trademark must adhere to the specified Qdrant Trademark Standards for Use.<br><br>Should you require clarification or seek permission to utilize these resources, feel free to reach out to us at link: url: "mailto:info@qdrant.com" text: info@qdrant.com. sitemapExclude: true ---
brand-resources/brand-resources-content.md
--- title: Qdrant Brand Resources buttons: - id: 0 url: "#logo" text: Logo - id: 1 url: "#colors" text: Colors - id: 2 url: "#typography" text: Typography - id: 3 url: "#trademarks" text: Trademarks sitemapExclude: true ---
brand-resources/brand-resources-hero.md
--- title: brand-resources description: brand-resources build: render: always cascade: - build: list: local publishResources: false render: never ---
brand-resources/_index.md
--- title: Cloud Quickstart weight: 4 aliases: - quickstart-cloud - ../cloud-quick-start - cloud-quick-start - cloud-quickstart - cloud/quickstart-cloud/ --- # How to Get Started With Qdrant Cloud <p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/g6uJhjAoNMg?si=EZ3OtmEdKKHIOgFy" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p> <p style="text-align: center;">You can try vector search on Qdrant Cloud in three steps. </br> Instructions are below, but the video is faster:</p> ## Set up a Qdrant Cloud cluster 1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google, or GitHub credentials. 2. Go to **Overview** and follow the onboarding instructions under **Create First Cluster**. ![create a cluster](/docs/gettingstarted/gui-quickstart/create-cluster.png) 3. When you create it, you will receive an API key. You will need to copy and paste it soon. 4. Your new cluster will be created under **Clusters**. Give it a few moments to provision. ## Access the cluster dashboard 1. Go to your **Clusters**. Under **Actions**, open the **Dashboard**. 2. Paste your new API key here. If you lost it, create another in **Access Management**. 3. The key will grant you access to your Qdrant instance. Now you can see the cluster Dashboard. ![access the dashboard](/docs/gettingstarted/gui-quickstart/access-dashboard.png) ## Try the Tutorial sandbox 1. Open the interactive **Tutorial**. Here, you can test basic Qdrant API requests. 2. Using the **Quickstart** instructions, create a collection, add vectors, and run a search. 3. The output on the right will show you some basic semantic search results. ![interactive-tutorial](/docs/gettingstarted/gui-quickstart/interactive-tutorial.png) ## That's vector search! You can stay in the sandbox and continue trying out different API calls.</br> When ready, use the Console and our complete REST API to try other operations. ## What's next? Now that you have a Qdrant Cloud cluster up and running, you should [test remote access](/documentation/cloud/authentication/#test-cluster-access) with a Qdrant Client.
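Once your cluster is running and you have an API key, you can also verify access from code. Below is a minimal Python sketch; the cluster URL and API key are placeholders you must replace with your own values from the Cloud Console:

```python
from qdrant_client import QdrantClient

# Placeholder credentials - copy the real URL and key from the Cloud Console
client = QdrantClient(
    url="https://xyz-example.eu-central.aws.cloud.qdrant.io:6333",
    api_key="<YOUR_API_KEY>",
)

# Any authenticated call verifies connectivity; listing collections is a cheap one
print(client.get_collections())
```

If this call returns without an authentication error, your client is correctly connected to the cluster.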
documentation/quickstart-cloud.md
--- title: Release Notes weight: 24 type: external-link external_url: https://github.com/qdrant/qdrant/releases sitemapExclude: True ---
documentation/release-notes.md
--- title: Benchmarks weight: 33 draft: true ---
documentation/benchmarks.md
--- title: Community links weight: 42 draft: true --- # Community Contributions Though we do not officially maintain this content, we still feel that it is valuable and thank our dedicated contributors. | Link | Description | Stack | |------|------------------------------|--------| | [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete Python toolset that supports migration between the two products. | Qdrant, Pinecone | | [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex | | [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built-in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
documentation/community-links.md
--- title: Local Quickstart weight: 5 aliases: - quick_start - quick-start - quickstart --- # How to Get Started with Qdrant Locally In this short example, you will use the Python client to create a collection, load data into it, and run a basic search query. <aside role="status">Before you start, please make sure Docker is installed and running on your system.</aside> ## Download and run First, download the latest Qdrant image from Docker Hub: ```bash docker pull qdrant/qdrant ``` Then, run the service: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` Under the default configuration, all data will be stored in the `./qdrant_storage` directory. This is also the only directory that both the container and the host machine can see. Qdrant is now accessible: - REST API: [localhost:6333](http://localhost:6333) - Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard) - gRPC API: [localhost:6334](http://localhost:6334) ## Initialize the client ```python from qdrant_client import QdrantClient client = QdrantClient(url="http://localhost:6333") ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); ``` ```rust use qdrant_client::Qdrant; // The Rust client uses Qdrant's gRPC interface let client = Qdrant::from_url("http://localhost:6334").build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; // The Java client uses Qdrant's gRPC interface QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); ``` ```csharp using Qdrant.Client; // The C# client uses Qdrant's gRPC interface var client = new QdrantClient("localhost", 6334); ``` ```go import "github.com/qdrant/go-client/qdrant" // The Go client uses Qdrant's gRPC interface client, err := qdrant.NewClient(&qdrant.Config{ Host: "localhost", Port: 6334, }) ``` <aside role="status">By default, Qdrant starts with no encryption or authentication. This means anyone with network access to your machine can access your Qdrant container instance. Please read <a href="/documentation/security/">Security</a> carefully for details on how to secure your instance.</aside> ## Create a collection You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection will use the dot product distance metric to compare vectors.
```python from qdrant_client.models import Distance, VectorParams client.create_collection( collection_name="test_collection", vectors_config=VectorParams(size=4, distance=Distance.DOT), ) ``` ```typescript await client.createCollection("test_collection", { vectors: { size: 4, distance: "Dot" }, }); ``` ```rust use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder}; client .create_collection( CreateCollectionBuilder::new("test_collection") .vectors_config(VectorParamsBuilder::new(4, Distance::Dot)), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; client.createCollectionAsync("test_collection", VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get(); ``` ```csharp using Qdrant.Client.Grpc; await client.CreateCollectionAsync(collectionName: "test_collection", vectorsConfig: new VectorParams { Size = 4, Distance = Distance.Dot }); ``` ```go import ( "context" "github.com/qdrant/go-client/qdrant" ) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: "test_collection", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 4, Distance: qdrant.Distance_Dot, }), }) ``` ## Add vectors Let's now add a few vectors with a payload. Payloads are other data you want to associate with the vector: ```python from qdrant_client.models import PointStruct operation_info = client.upsert( collection_name="test_collection", wait=True, points=[ PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"city": "Berlin"}), PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={"city": "London"}), PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={"city": "Moscow"}), PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={"city": "New York"}), PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={"city": "Beijing"}), PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={"city": "Mumbai"}), ], ) print(operation_info) ``` ```typescript const operationInfo = await client.upsert("test_collection", { wait: true, points: [ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: "Berlin" } }, { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: "London" } }, { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: "Moscow" } }, { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: "New York" } }, { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: "Beijing" } }, { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: "Mumbai" } }, ], }); console.debug(operationInfo); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; let points = vec![ PointStruct::new(1, vec![0.05, 0.61, 0.76, 0.74], [("city", "Berlin".into())]), PointStruct::new(2, vec![0.19, 0.81, 0.75, 0.11], [("city", "London".into())]), PointStruct::new(3, vec![0.36, 0.55, 0.47, 0.94], [("city", "Moscow".into())]), // ..truncated ]; let response = client .upsert_points(UpsertPointsBuilder::new("test_collection", points).wait(true)) .await?; dbg!(response); ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpdateResult; UpdateResult operationInfo = client .upsertAsync( "test_collection", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.05f,
0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of("city", value("Berlin"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f)) .putAllPayload(Map.of("city", value("London"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f)) .putAllPayload(Map.of("city", value("Moscow"))) .build())) // Truncated .get(); System.out.println(operationInfo); ``` ```csharp using Qdrant.Client.Grpc; var operationInfo = await client.UpsertAsync(collectionName: "test_collection", points: new List<PointStruct> { new() { Id = 1, Vectors = new float[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { ["city"] = "Berlin" } }, new() { Id = 2, Vectors = new float[] { 0.19f, 0.81f, 0.75f, 0.11f }, Payload = { ["city"] = "London" } }, new() { Id = 3, Vectors = new float[] { 0.36f, 0.55f, 0.47f, 0.94f }, Payload = { ["city"] = "Moscow" } }, // Truncated }); Console.WriteLine(operationInfo); ``` ```go import ( "context" "fmt" "github.com/qdrant/go-client/qdrant" ) operationInfo, err := client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: "test_collection", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74), Payload: qdrant.NewValueMap(map[string]any{"city": "Berlin"}), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11), Payload: qdrant.NewValueMap(map[string]any{"city": "London"}), }, { Id: qdrant.NewIDNum(3), Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94), Payload: qdrant.NewValueMap(map[string]any{"city": "Moscow"}), }, // Truncated }, }) if err != nil { panic(err) } fmt.Println(operationInfo) ``` **Response:** ```python operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'> ``` ```typescript { operation_id: 0, status: 'completed' } ``` ```rust PointsOperationResponse { result: Some( UpdateResult { operation_id: Some( 0, ), status: Completed, }, ), time: 0.00094027, } ``` ```java operation_id: 0 status: Completed ``` ```csharp { "operationId": "0", "status": "Completed" } ``` ```go operation_id:0 status:Acknowledged ``` ## Run a query Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`? 
```python search_result = client.query_points( collection_name="test_collection", query=[0.2, 0.1, 0.9, 0.7], limit=3 ).points print(search_result) ``` ```typescript let searchResult = await client.query( "test_collection", { query: [0.2, 0.1, 0.9, 0.7], limit: 3 }); console.debug(searchResult.points); ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; let search_result = client .query( QueryPointsBuilder::new("test_collection") .query(vec![0.2, 0.1, 0.9, 0.7]) ) .await?; dbg!(search_result); ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.ScoredPoint; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; List<ScoredPoint> searchResult = client.queryAsync(QueryPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .build()).get(); System.out.println(searchResult); ``` ```csharp var searchResult = await client.QueryAsync( collectionName: "test_collection", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, limit: 3 ); Console.WriteLine(searchResult); ``` ```go import ( "context" "fmt" "github.com/qdrant/go-client/qdrant" ) searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: "test_collection", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), }) if err != nil { panic(err) } fmt.Println(searchResult) ``` **Response:** ```json [ { "id": 4, "version": 0, "score": 1.362, "payload": null, "vector": null }, { "id": 1, "version": 0, "score": 1.273, "payload": null, "vector": null }, { "id": 3, "version": 0, "score": 1.208, "payload": null, "vector": null } ] ``` The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default. See [payload and vector in the result](../concepts/search/#payload-and-vector-in-the-result) on how to enable it. ## Add a filter We can narrow down the results further by filtering by payload. Let's find the closest results that include "London".
```python from qdrant_client.models import Filter, FieldCondition, MatchValue search_result = client.query_points( collection_name="test_collection", query=[0.2, 0.1, 0.9, 0.7], query_filter=Filter( must=[FieldCondition(key="city", match=MatchValue(value="London"))] ), with_payload=True, limit=3, ).points print(search_result) ``` ```typescript searchResult = await client.query("test_collection", { query: [0.2, 0.1, 0.9, 0.7], filter: { must: [{ key: "city", match: { value: "London" } }], }, with_payload: true, limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder}; let search_result = client .query( QueryPointsBuilder::new("test_collection") .query(vec![0.2, 0.1, 0.9, 0.7]) .filter(Filter::must([Condition::matches( "city", "London".to_string(), )])) .with_payload(true), ) .await?; dbg!(search_result); ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; List<ScoredPoint> searchResult = client.queryAsync(QueryPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London"))) .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()).get(); System.out.println(searchResult); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; var searchResult = await client.QueryAsync( collectionName: "test_collection", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword("city", "London"), limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` ```go import ( "context" "fmt" "github.com/qdrant/go-client/qdrant" ) searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: "test_collection", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch("city", "London"), }, }, WithPayload: qdrant.NewWithPayload(true), }) if err != nil { panic(err) } fmt.Println(searchResult) ``` **Response:** ```json [ { "id": 2, "version": 0, "score": 0.871, "payload": { "city": "London" }, "vector": null } ] ``` <aside role="status">To make filtered search fast on real datasets, we highly recommend creating <a href="../concepts/indexing/#payload-index">payload indexes</a>!</aside> You have just conducted vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score. ## Next steps Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates. To move onto some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/). **Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get set up.
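To illustrate that note, here is a minimal sketch of Local Mode. It runs entirely inside your Python process, with no Docker container involved:

```python
from qdrant_client import QdrantClient

# Ephemeral in-memory instance - handy for tests and quick experiments
client = QdrantClient(":memory:")

# Or persist data to a local directory instead:
# client = QdrantClient(path="./qdrant_local_db")
```

The collection, upsert, and query calls shown above work unchanged against a local-mode client.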
documentation/quickstart.md
--- title: Qdrant Cloud API weight: 10 --- # Qdrant Cloud API The Qdrant Cloud API lets you manage Cloud accounts and their respective Qdrant clusters. You can use this API to manage your clusters, authentication methods, and cloud configurations. | REST API | Documentation | | -------- | ------------------------------------------------------------------------------------ | | v.0.1.0 | [OpenAPI Specification](https://cloud.qdrant.io/pa/v1/docs) | **Note:** This is not the Qdrant REST API. For core product APIs & SDKs, see our list of [interfaces](/documentation/interfaces/) ## Authentication: Connecting to Cloud API To interact with the Qdrant Cloud API, you must authenticate using an API key. Each request to the API must include the API key in the **Authorization** header. The API key acts as a bearer token and grants access to your account’s resources. You can create a Cloud API key in the Cloud Console UI. Go to **Access Management** > **Qdrant Cloud API Keys**. ![Authentication](/documentation/cloud/authentication.png) **Note:** Ensure that the API key is kept secure and not exposed in public repositories or logs. Once authenticated, the API allows you to manage clusters, collections, and perform other operations available to your account. ## Sample API Request Here's an example of a basic request to **list all clusters** in your Qdrant Cloud account: ```bash curl -X 'GET' \ 'https://cloud.qdrant.io/pa/v1/accounts/<YOUR_ACCOUNT_ID>/clusters' \ -H 'accept: application/json' \ -H 'Authorization: <YOUR_API_KEY>' ``` This request will return a list of clusters associated with your account in JSON format. ## Cluster Management Use these endpoints to create and manage your Qdrant database clusters. The API supports fine-grained control over cluster resources (CPU, RAM, disk), node configurations, tolerations, and other operational characteristics across all cloud providers (AWS, GCP, Azure) and their respective regions in Qdrant Cloud, as well as Hybrid Cloud. - **Get Cluster by ID**: Retrieve detailed information about a specific cluster using the cluster ID and associated account ID. - **Delete Cluster**: Remove a cluster, with optional deletion of backups. - **Update Cluster**: Apply modifications to a cluster's configuration. - **List Clusters**: Get all clusters associated with a specific account, filtered by region or other criteria. - **Create Cluster**: Add new clusters to the account with configurable parameters such as nodes, cloud provider, and regions. - **Get Booking**: Manage hosting across various cloud providers (AWS, GCP, Azure) and their respective regions. ## Cluster Authentication Management Use these endpoints to manage your cluster API keys. - **List API Keys**: Retrieve all API keys associated with an account. - **Create API Key**: Generate a new API key for programmatic access. - **Delete API Key**: Revoke access by deleting a specific API key. - **Update API Key**: Modify attributes of an existing API key.
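For reference, the same list-clusters request can be issued from Python. This is a sketch using the `requests` library against the endpoint shown in the curl sample; the account ID and API key are placeholders, and the exact response schema is defined by the OpenAPI specification linked above:

```python
import requests

ACCOUNT_ID = "<YOUR_ACCOUNT_ID>"  # placeholder
API_KEY = "<YOUR_API_KEY>"        # placeholder - create one under Access Management

# Same endpoint as the curl sample above
response = requests.get(
    f"https://cloud.qdrant.io/pa/v1/accounts/{ACCOUNT_ID}/clusters",
    headers={"accept": "application/json", "Authorization": API_KEY},
)
response.raise_for_status()
print(response.json())
```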
documentation/qdrant-cloud-api.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Getting Started" type: delimiter weight: 1 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---
documentation/0-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Integrations" type: delimiter weight: 14 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---
documentation/2-dl.md
--- title: Roadmap weight: 32 draft: true --- # Qdrant 2023 Roadmap Goals of the release: * **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back. * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version. * Storage should be compatible between any two consecutive versions, so you can upgrade Qdrant with automatic data migration between consecutive versions. * **Make billion-scale serving cheap** - Qdrant can already serve billions of vectors, but we want to make it even more affordable. * **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you can go from 1 to 1B vectors seamlessly. * **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc. ## Milestones * :atom_symbol: Quantization support * [ ] Scalar quantization f32 -> u8 (4x compression) * [ ] Advanced quantization (8x and 16x compression) * [ ] Support for binary vectors --- * :arrow_double_up: Scalability * [ ] Automatic replication factor adjustment * [ ] Automatic shard distribution on cluster scaling * [ ] Repartitioning support --- * :eyes: Search scenarios * [ ] Diversity search - search for vectors that are different from each other * [ ] Sparse vectors search - search for vectors with a small number of non-zero values * [ ] Grouping requests - search within payload-defined groups * [ ] Different scenarios for recommendation API --- * Additionally * [ ] Extend full-text filtering support * [ ] Support for phrase queries * [ ] Support for logical operators * [ ] Simplify update of collection parameters
documentation/roadmap.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Managed Services" type: delimiter weight: 7 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---
documentation/4-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Examples" type: delimiter weight: 17 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---
documentation/3-dl.md
--- title: Practice Datasets weight: 23 --- # Common Datasets in Snapshot Format You may find that creating embeddings from datasets is a very resource-intensive task. If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page. These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance. ## Available datasets Our snapshots are usually generated from publicly available datasets, which are often used for non-commercial or academic purposes. The following datasets are currently available. Please click on a dataset name to see its detailed description. | Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub | |--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) | | [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) | | [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) | Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot) using the Qdrant CLI upon startup or through the API. ## Qdrant on Hugging Face <p align="center"> <a href="https://huggingface.co/Qdrant"> <img style="width: 500px; max-width: 100%;" src="/content/images/hf-logo-with-title.svg" alt="HuggingFace" title="HuggingFace"> </a> </p> [Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to provide you with datasets containing neural embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please let us know if you'd like to see a specific dataset!** If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index), or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/). ## Arxiv.org [Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple fields. Operated by Cornell University, arXiv allows researchers to share their findings with the scientific community and receive feedback before they undergo peer review for formal publication. Its archives host millions of scholarly articles, making it an invaluable resource for those looking to explore the cutting edge of scientific research. 
With a high frequency of daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset that is ripe for mining, analysis, and the development of future innovations. <aside role="status"> Arxiv.org snapshots were created using precomputed embeddings exposed by <a href="https://alex.macrocosm.so/download">the Alexandria Index</a>. </aside> ### Arxiv.org titles This dataset contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier). ```json { "title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities", "DOI": "1612.05191" } ``` The embeddings were generated with the InstructorXL model using the following instruction: > Represent the Research Paper title for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Research Paper title for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It also works in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot" } ``` ### Arxiv.org abstracts This dataset contains embeddings generated from the paper abstracts. Each vector has a payload with the abstract used to create it, along with the DOI (Digital Object Identifier). ```json { "abstract": "Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n", "DOI": "1612.05191" } ``` The embeddings were generated with the InstructorXL model using the following instruction: > Represent the Research Paper abstract for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. 
We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train." instruction = "Represent the Research Paper abstract for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It also works in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot" } ``` ## Wolt food Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of food images from the Wolt app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 float numbers. There is also a JSON payload attached to each point, which looks similar to this: ```json { "cafe": { "address": "VGX7+6R2 Vecchia Napoli, Valletta", "categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"], "location": {"lat": 35.8980154, "lon": 14.5145106}, "menu_id": "610936a4ee8ea7a56f4a372a", "name": "Vecchia Napoli Is-Suq Tal-Belt", "rating": 9, "slug": "vecchia-napoli-skyparks-suq-tal-belt" }, "description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli", "image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg", "name": "L'Amatriciana" } ``` The embeddings were generated with the clip-ViT-B-32 model using the following code snippet: ```python from PIL import Image from sentence_transformers import SentenceTransformer image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg" model = SentenceTransformer("clip-ViT-B-32") embedding = model.encode(Image.open(image_path)) ``` The snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It also works in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot" } ```
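If you prefer the Python client over raw HTTP, the same snapshot recovery can be sketched as follows; the collection name `wolt_food` is an arbitrary example, and the URL should point at your own instance:

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # or your Qdrant Cloud URL plus api_key

# Recover the Wolt food snapshot into a collection named "wolt_food"
client.recover_snapshot(
    collection_name="wolt_food",
    location="https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot",
)
```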
documentation/datasets.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "User Manual" type: delimiter weight: 10 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---
documentation/1-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Support" type: delimiter weight: 21 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---
documentation/5-dl.md
--- title: Home weight: 2 hideTOC: true --- # Documentation Qdrant is an AI-native vector database and a semantic search engine. You can use it to extract meaningful information from unstructured data. Want to see how it works? [Clone this repo now](https://github.com/qdrant/qdrant_demo/) and build a search engine in five minutes. ||| |-:|:-| |[Cloud Quickstart](/documentation/quickstart-cloud/)|[Local Quickstart](/documentation/quick-start/)| ## Ready to start developing? ***<p style="text-align: center;">Qdrant is open-source and can be self-hosted. However, the quickest way to get started is with our [free tier](https://qdrant.to/cloud) on Qdrant Cloud. It scales easily and provides a UI where you can interact with data.</p>*** [![Hybrid Cloud](/docs/homepage/cloud-cta.png)](https://qdrant.to/cloud) ## Qdrant's most popular features: |||| |:-|:-|:-| |[Filterable HNSW](/documentation/filtering/) </br> Single-stage payload filtering | [Recommendations & Context Search](/documentation/concepts/explore/#explore-the-data) </br> Exploratory advanced search| [Pure-Vector Hybrid Search](/documentation/hybrid-queries/)</br>Full text and semantic search in one| |[Multitenancy](/documentation/guides/multiple-partitions/) </br> Payload-based partitioning|[Custom Sharding](/documentation/guides/distributed_deployment/#sharding) </br> For data isolation and distribution|[Role Based Access Control](/documentation/guides/security/?q=jwt#granular-access-control-with-jwt)</br>Secure JWT-based access | |[Quantization](/documentation/guides/quantization/) </br> Compress data for drastic speedups|[Multivector Support](/documentation/concepts/vectors/?q=multivect#multivectors) </br> For ColBERT late interaction |[Built-in IDF](/documentation/concepts/indexing/?q=inverse+docu#idf-modifier) </br> Cutting-edge similarity calculation|
documentation/_index.md
--- title: Contribution Guidelines weight: 35 draft: true --- # How to contribute If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps - the best contribution would be feedback on your experience with Qdrant. Let us know whenever you have a problem, face an unexpected behavior, or see a lack of documentation. You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop us a [message](https://discord.gg/tdtYvXjC4h). If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community. For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md). If you have questions about the code or architecture, reach out to us at any time. Feeling confident and want to contribute more? Come [work with us](https://qdrant.join.com/)!
documentation/contribution-guidelines.md
--- title: Bubble aliases: [ ../frameworks/bubble/ ] --- # Bubble [Bubble](https://bubble.io/) is a software development platform that enables anyone to build and launch fully functional web applications without writing code. You can use the [Qdrant Bubble plugin](https://bubble.io/plugin/qdrant-1716804374179x344999530386685950) to interface with Qdrant in your workflows. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. An account at [Bubble.io](https://bubble.io/) and an app set up. ## Setting up the plugin Navigate to your app's workflows. Select `"Install more plugins actions"`. ![Install New Plugin](/documentation/frameworks/bubble/install-bubble-plugin.png) You can now search for the Qdrant plugin and install it. Ensure all the categories are selected to perform a full search. ![Qdrant Plugin Search](/documentation/frameworks/bubble/qdrant-plugin-search.png) The Qdrant plugin can now be found in the installed plugins section of your workflow. Enter the API key of your Qdrant instance for authentication. ![Qdrant Plugin Home](/documentation/frameworks/bubble/qdrant-plugin-home.png) The plugin provides actions for upserting, searching, updating and deleting points from your Qdrant collection with dynamic and static values from your Bubble workflow. ## Further Reading - [Bubble Academy](https://bubble.io/academy). - [Bubble Manual](https://manual.bubble.io/)
documentation/platforms/bubble.md
--- title: Make.com aliases: [ ../frameworks/make/ ] --- # Make.com [Make](https://www.make.com/) is a platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems without code. Find the comprehensive list of available Make apps [here](https://www.make.com/en/integrations). Qdrant is available as an [app](https://www.make.com/en/integrations/qdrant) within Make to add to your scenarios. ![Qdrant Make hero](/documentation/frameworks/make/hero-page.png) ## Prerequisites Before you start, make sure you have the following: 1. A Qdrant instance to connect to. You can get free cloud instance [cloud.qdrant.io](https://cloud.qdrant.io/). 2. An account at Make.com. You can register yourself [here](https://www.make.com/en/register). ## Setting up a connection Navigate to your scenario on the Make dashboard and select a Qdrant app module to start a connection. ![Qdrant Make connection](/documentation/frameworks/make/connection.png) You can now establish a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/). ![Qdrant Make form](/documentation/frameworks/make/connection-form.png) ## Modules Modules represent actions that Make performs with an app. The Qdrant Make app enables you to trigger the following app modules. ![Qdrant Make modules](/documentation/frameworks/make/modules.png) The modules support mapping to connect the data retrieved by one module to another module to perform the desired action. You can read more about the data processing options available for the modules in the [Make reference](https://www.make.com/en/help/modules). ## Next steps - Find a list of Make workflow templates to connect with Qdrant [here](https://www.make.com/en/templates). - Make scenario reference docs can be found [here](https://www.make.com/en/help/scenarios).
documentation/platforms/make.md
--- title: Portable.io aliases: [ ../frameworks/portable/ ] --- # Portable [Portable](https://portable.io/) is an ELT platform that builds connectors on-demand for data teams. It enables connecting applications to your data warehouse with no code. You can use the [Qdrant connector](https://portable.io/connectors/qdrant) to build data pipelines from your collections. ![Qdrant Connector](/documentation/frameworks/portable/home.png) ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A [Portable account](https://app.portable.io/). ## Setting up the connector Navigate to the Portable dashboard. Search for `"Qdrant"` in the sources section. ![Install New Source](/documentation/frameworks/portable/install.png) Configure the connector with your Qdrant instance credentials. ![Configure connector](/documentation/frameworks/portable/configure.png) You can now build your flows using data from Qdrant by selecting a [destination](https://app.portable.io/destinations) and scheduling it. ## Further Reading - [Portable API Reference](https://developer.portable.io/api-reference/introduction). - [Portable Academy](https://portable.io/learn)
documentation/platforms/portable.md
---
title: BuildShip
aliases: [ ../frameworks/buildship/ ]
---

# BuildShip

[BuildShip](https://buildship.com/) is a low-code visual builder to create APIs, scheduled jobs, and backend workflows with AI assistance.

You can use the [Qdrant integration](https://buildship.com/integrations/qdrant) to develop workflows with semantic-search capabilities.

## Prerequisites

1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. A [BuildShip](https://buildship.app/) account for developing workflows.

## Nodes

Nodes are the fundamental building blocks of BuildShip. Each is responsible for an operation in your workflow. The Qdrant integration includes the following nodes, which can be extended if required.

### Add Point

![Add Point](/documentation/frameworks/buildship/add.png)

### Retrieve Points

![Retrieve Points](/documentation/frameworks/buildship/get.png)

### Delete Points

![Delete Points](/documentation/frameworks/buildship/delete.png)

### Search Points

![Search Points](/documentation/frameworks/buildship/search.png)

## Further Reading

- [BuildShip Docs](https://docs.buildship.com/basics/node).
- [BuildShip Integrations](https://buildship.com/integrations)
documentation/platforms/buildship.md
---
title: Apify
aliases: [ ../frameworks/apify/ ]
---

# Apify

[Apify](https://apify.com/) is a web scraping and browser automation platform featuring an [app store](https://apify.com/store) with over 1,500 pre-built micro-apps known as Actors. These serverless cloud programs, which are essentially Docker containers under the hood, are designed for various web automation applications, including data collection.

One such Actor, built especially for AI and RAG applications, is [Website Content Crawler](https://apify.com/apify/website-content-crawler).

It's ideal for this purpose because it has built-in HTML processing and data-cleaning functions. That means you can easily remove fluff, duplicates, and other things on a web page that aren't relevant, and provide only the necessary data to the language model.

The resulting Markdown can then be used to feed Qdrant to train AI models or supply them with fresh web content.

Qdrant is available as an [official integration](https://apify.com/apify/qdrant-integration) to load Apify datasets into a collection.

You can refer to the [Apify documentation](https://docs.apify.com/platform/integrations/qdrant) to set up the integration via the Apify UI.

## Programmatic Usage

Apify also supports programmatic access to integrations via the [Apify Python SDK](https://docs.apify.com/sdk/python/).

1. Install the Apify Python SDK by running the following command:

```sh
pip install apify-client
```

2. Create a Python script and import all the necessary modules:

```python
from apify_client import ApifyClient

APIFY_API_TOKEN = "YOUR-APIFY-TOKEN"
OPENAI_API_KEY = "YOUR-OPENAI-API-KEY"
# COHERE_API_KEY = "YOUR-COHERE-API-KEY"

QDRANT_URL = "YOUR-QDRANT-URL"
QDRANT_API_KEY = "YOUR-QDRANT-API-KEY"

client = ApifyClient(APIFY_API_TOKEN)
```

3. Call the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor to crawl the Qdrant documentation and extract text content from the web pages:

```python
actor_call = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://qdrant.tech/documentation/"}]}
)
```

4. Call the Qdrant integration and store all data in the Qdrant Vector Database:

```python
qdrant_integration_inputs = {
    "qdrantUrl": QDRANT_URL,
    "qdrantApiKey": QDRANT_API_KEY,
    "qdrantCollectionName": "apify",
    "qdrantAutoCreateCollection": True,
    "datasetId": actor_call["defaultDatasetId"],
    "datasetFields": ["text"],
    "enableDeltaUpdates": True,
    "deltaUpdatesPrimaryDatasetFields": ["url"],
    "expiredObjectDeletionPeriodDays": 30,
    "embeddingsProvider": "OpenAI",  # "Cohere"
    "embeddingsApiKey": OPENAI_API_KEY,
    "performChunking": True,
    "chunkSize": 1000,
    "chunkOverlap": 0,
}
actor_call = client.actor("apify/qdrant-integration").call(
    run_input=qdrant_integration_inputs
)
```

Upon running the script, the data from <https://qdrant.tech/documentation/> will be scraped, transformed into vector embeddings and stored in the Qdrant collection.

## Further Reading

- Apify [Documentation](https://docs.apify.com/)
- Apify [Templates](https://apify.com/templates)
- Integration [Source Code](https://github.com/apify/actor-vector-database-integrations)
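As a quick sanity check after the ingestion finishes, you can query the `apify` collection directly with the Qdrant client, continuing the script above. This is a minimal sketch: it assumes you embed the query with the same OpenAI embedding model the integration was configured with (the model name below is a placeholder — match it to your integration settings), and it requires a recent `qdrant-client` with the `query_points` API.

```python
from openai import OpenAI
from qdrant_client import QdrantClient

openai_client = OpenAI(api_key=OPENAI_API_KEY)
qdrant_client = QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)

# Embed the query with the same model the integration used (placeholder name)
query_vector = (
    openai_client.embeddings.create(
        input=["How do I get started with Qdrant?"],
        model="text-embedding-3-small",
    )
    .data[0]
    .embedding
)

# Retrieve the most relevant crawled chunks from the "apify" collection
hits = qdrant_client.query_points(
    collection_name="apify", query=query_vector, limit=3
).points

for hit in hits:
    print(hit.score, hit.payload)
```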
documentation/platforms/apify.md
---
title: PrivateGPT
aliases: [ ../integrations/privategpt/, ../frameworks/privategpt/ ]
---

# PrivateGPT

[PrivateGPT](https://docs.privategpt.dev/) is a production-ready AI project that allows you to inquire about your documents using Large Language Models (LLMs) with offline support.

PrivateGPT uses Qdrant as the default vectorstore for ingesting and retrieving documents.

## Configuration

Qdrant settings can be configured by setting values for the `qdrant` property in the `settings.yaml` file. By default, Qdrant tries to connect to an instance at `http://localhost:6333`.

Example:

```yaml
qdrant:
  url: "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333"
  api_key: "<your-api-key>"
```

The available [configuration options](https://docs.privategpt.dev/manual/storage/vector-stores#qdrant-configuration) are:

| Field | Description |
|--------------|-------------|
| location | If `:memory:` - use in-memory Qdrant instance.<br>If `str` - use it as a `url` parameter.|
| url | Either host or str of `Optional[scheme], host, Optional[port], Optional[prefix]`.<br> Eg. `http://localhost:6333` |
| port | Port of the REST API interface. Default: `6333` |
| grpc_port | Port of the gRPC interface. Default: `6334` |
| prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. |
| https | If `true` - use HTTPS(SSL) protocol.|
| api_key | API key for authentication in Qdrant Cloud.|
| prefix | If set, add `prefix` to the REST URL path.<br>Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.|
| timeout | Timeout for REST and gRPC API requests.<br>Default: 5.0 seconds for REST and unlimited for gRPC |
| host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.|
| path | Persistence path for QdrantLocal. Eg. `local_data/private_gpt/qdrant`|
| force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection.|

## Next steps

Find the PrivateGPT docs [here](https://docs.privategpt.dev/).
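For a fully local setup that doesn't require a running Qdrant server, you can instead point PrivateGPT at an on-disk Qdrant instance. A minimal sketch using the `path` option from the table above (the directory is an arbitrary choice):

```yaml
qdrant:
  path: "local_data/private_gpt/qdrant"
```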
documentation/platforms/privategpt.md
---
title: Pipedream
aliases: [ ../frameworks/pipedream/ ]
---

# Pipedream

[Pipedream](https://pipedream.com/) is a development platform that allows developers to connect many different applications, data sources, and APIs in order to build automated cross-platform workflows. It also offers code-level control with Node.js, Python, Go, or Bash if required.

You can use the [Qdrant app](https://pipedream.com/apps/qdrant) in Pipedream to add vector search capabilities to your workflows.

## Prerequisites

1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
2. A [Pipedream project](https://pipedream.com/) to develop your workflows.

## Setting Up

Search for the Qdrant app in your workflow apps.

![Qdrant Pipedream App](/documentation/frameworks/pipedream/qdrant-app.png)

The Qdrant app offers an extensible API interface and pre-built actions.

![Qdrant App Features](/documentation/frameworks/pipedream/app-features.png)

Select any of the actions of the app to set up a connection.

![Qdrant Connect Account](/documentation/frameworks/pipedream/app-upsert-action.png)

Configure the connection with the credentials of your Qdrant instance.

![Qdrant Connection Credentials](/documentation/frameworks/pipedream/app-connection.png)

You can verify your credentials using the "Test Connection" button.

Once a connection is set up, you can use the app to build workflows with the [2000+ apps supported by Pipedream](https://pipedream.com/apps/).

## Further Reading

- [Pipedream Documentation](https://pipedream.com/docs).
- [Qdrant Cloud Authentication](https://qdrant.tech/documentation/cloud/authentication/).
- [Source Code](https://github.com/PipedreamHQ/pipedream/tree/master/components/qdrant)
documentation/platforms/pipedream.md
--- title: Ironclad Rivet aliases: [ ../frameworks/rivet/ ] --- # Ironclad Rivet [Rivet](https://rivet.ironcladapp.com/) is an Integrated Development Environment (IDE) and library designed for creating AI agents using a visual, graph-based interface. Qdrant is available as a [plugin](https://github.com/qdrant/rivet-plugin-qdrant) for building vector-search powered workflows in Rivet. ## Installation - Open the plugins overlay at the top of the screen. - Search for the official Qdrant plugin. - Click the "Add" button to install it in your current project. ![Rivet plugin installation](/documentation/frameworks/rivet/installation.png) ## Setting up the connection You can configure your Qdrant instance credentials in the Rivet settings after installing the plugin. ![Rivet plugin connection](/documentation/frameworks/rivet/connection.png) Once you've configured your credentials, you can right-click on your workspace to add nodes from the plugin and get building! ![Rivet plugin nodes](/documentation/frameworks/rivet/node.png) ## Further Reading - Rivet [Tutorial](https://rivet.ironcladapp.com/docs/tutorial). - Rivet [Documentation](https://rivet.ironcladapp.com/docs). - Plugin [Source Code](https://github.com/qdrant/rivet-plugin-qdrant)
documentation/platforms/rivet.md
---
title: DocsGPT
aliases: [ ../frameworks/docsgpt/ ]
---

# DocsGPT

[DocsGPT](https://docsgpt.arc53.com/) is an open-source documentation assistant that enables you to build conversational user experiences on top of your data.

Qdrant is supported as a vectorstore in DocsGPT to ingest and semantically retrieve documents.

## Configuration

Learn how to set up DocsGPT in their [Quickstart guide](https://docs.docsgpt.co.uk/Deploying/Quickstart).

You can configure DocsGPT with environment variables in a `.env` file.

To configure DocsGPT to use Qdrant as the vector store, set `VECTOR_STORE` to `"qdrant"`.

```bash
echo "VECTOR_STORE=qdrant" >> .env
```

DocsGPT includes a list of the Qdrant configuration options that you can set as environment variables [here](https://github.com/arc53/DocsGPT/blob/00dfb07b15602319bddb95089e3dab05fac56240/application/core/settings.py#L46-L59).

## Further reading

- [DocsGPT Reference](https://github.com/arc53/DocsGPT)
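Beyond `VECTOR_STORE`, you will typically also need to point DocsGPT at your Qdrant instance. The variable names below are hypothetical placeholders — verify them against the linked `settings.py` for your DocsGPT version before using them:

```bash
# Hypothetical variable names — confirm against DocsGPT's settings.py
echo "QDRANT_URL=https://xyz-example.eu-central.aws.cloud.qdrant.io:6333" >> .env
echo "QDRANT_API_KEY=<your-api-key>" >> .env
```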
documentation/platforms/docsgpt.md
---
title: Platforms
weight: 15
---

## Platform Integrations

| Platform                               | Description                                                                                            |
| ------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [Apify](./apify/)                      | Platform to build web scrapers and automate web browser tasks.                                         |
| [Bubble](./bubble)                     | No-code development platform for building and launching web applications.                              |
| [BuildShip](./buildship)               | Low-code visual builder to create APIs, scheduled jobs, and backend workflows.                          |
| [DocsGPT](./docsgpt/)                  | Tool for ingesting documentation sources and enabling conversations and queries.                        |
| [Make](./make/)                        | Cloud platform to build low-code workflows by integrating various software applications.                |
| [N8N](./n8n/)                          | Platform for node-based, low-code workflow automation.                                                  |
| [Pipedream](./pipedream/)              | Platform for connecting apps and developing event-driven automation.                                    |
| [Portable.io](./portable/)             | Cloud platform for developing and deploying ELT transformations.                                        |
| [PrivateGPT](./privategpt/)            | Tool to ask questions about your documents using local LLMs, emphasizing privacy.                       |
| [Rivet](./rivet/)                      | A visual programming environment for building AI agents with LLMs.                                      |
documentation/platforms/_index.md
--- title: N8N aliases: [ ../frameworks/n8n/ ] --- # N8N [N8N](https://n8n.io/) is an automation platform that allows you to build flexible workflows focused on deep data integration. Qdrant is available as a vectorstore node in N8N for building AI-powered functionality within your workflows. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A running N8N instance. You can learn more about using the N8N cloud or self-hosting [here](https://docs.n8n.io/choose-n8n/). ## Setting up the vectorstore Select the Qdrant vectorstore from the list of nodes in your workflow editor. ![Qdrant n8n node](/documentation/frameworks/n8n/node.png) You can now configure the vectorstore node according to your workflow requirements. The configuration options reference can be found [here](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#node-parameters). ![Qdrant Config](/documentation/frameworks/n8n/config.png) Create a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/). ![Qdrant Credentials](/documentation/frameworks/n8n/credentials.png) The vectorstore supports the following operations: - Get Many - Get the top-ranked documents for a query. - Insert documents - Add documents to the vectorstore. - Retrieve documents - Retrieve documents for use with AI nodes. ## Further Reading - N8N vectorstore [reference](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/). - N8N AI-based workflows [reference](https://n8n.io/integrations/basic-llm-chain/). - [Source Code](https://github.com/n8n-io/n8n/tree/master/packages/@n8n/nodes-langchain/nodes/vector_store/VectorStoreQdrant)
documentation/platforms/n8n.md
---
title: Semantic Querying with Airflow and Astronomer
weight: 36
aliases:
  - /documentation/examples/qdrant-airflow-astronomer/
---

# Semantic Querying with Airflow and Astronomer

| Time: 45 min | Level: Intermediate |  |  |
| ------------ | ------------------- | --- | --- |

In this tutorial, you will use Qdrant as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in [Apache Airflow](https://airflow.apache.org/), an open-source tool that lets you set up data-engineering workflows.

You will write the pipeline as a DAG (Directed Acyclic Graph) in Python. With this, you can leverage Python's powerful ecosystem of libraries to achieve almost anything your data pipeline needs.

[Astronomer](https://www.astronomer.io/) is a managed platform that simplifies the process of developing and deploying Airflow projects via its easy-to-use CLI and extensive automation capabilities.

Airflow is useful when running operations in Qdrant based on data events or building parallel tasks for generating vector embeddings. By using Airflow, you can set up monitoring and alerts for your pipelines for full observability.

## Prerequisites

Please make sure you have the following ready:

- A running Qdrant instance. We'll be using a free instance from <https://cloud.qdrant.io>
- The Astronomer CLI. Find the installation instructions [here](https://docs.astronomer.io/astro/cli/install-cli).
- A [HuggingFace token](https://huggingface.co/docs/hub/en/security-tokens) to generate embeddings.

## Implementation

We'll be building a DAG that generates embeddings in parallel for our data corpus and performs semantic retrieval based on user input.

### Set up the project

The Astronomer CLI makes it very straightforward to set up the Airflow project:

```console
mkdir qdrant-airflow-tutorial && cd qdrant-airflow-tutorial
astro dev init
```

This command generates all of the project files you need to run Airflow locally. You can find a directory called `dags`, which is where we can place our Python DAG files.

To use Qdrant within Airflow, install the Qdrant Airflow provider by adding the following to the `requirements.txt` file:

```text
apache-airflow-providers-qdrant
```

### Configure credentials

We can set up provider connections using the Airflow UI, environment variables or the `airflow_settings.yml` file.

Add the following to the `.env` file in the project. Replace the values as per your credentials.

```env
HUGGINGFACE_TOKEN="<YOUR_HUGGINGFACE_ACCESS_TOKEN>"
AIRFLOW_CONN_QDRANT_DEFAULT='{
    "conn_type": "qdrant",
    "host": "xyz-example.eu-central.aws.cloud.qdrant.io:6333",
    "password": "<YOUR_QDRANT_API_KEY>"
}'
```

### Add the data corpus

Let's add some sample data to work with. Paste the following content into a file called `books.txt` within the `include` directory.

```text
1 | To Kill a Mockingbird (1960) | fiction | Harper Lee's Pulitzer Prize-winning novel explores racial injustice and moral growth through the eyes of young Scout Finch in the Deep South.
2 | Harry Potter and the Sorcerer's Stone (1997) | fantasy | J.K. Rowling's magical tale follows Harry Potter as he discovers his wizarding heritage and attends Hogwarts School of Witchcraft and Wizardry.
3 | The Great Gatsby (1925) | fiction | F. Scott Fitzgerald's classic novel delves into the glitz, glamour, and moral decay of the Jazz Age through the eyes of narrator Nick Carraway and his enigmatic neighbour, Jay Gatsby.
4 | 1984 (1949) | dystopian | George Orwell's dystopian masterpiece paints a chilling picture of a totalitarian society where individuality is suppressed and the truth is manipulated by a powerful regime.
5 | The Catcher in the Rye (1951) | fiction | J.D. Salinger's iconic novel follows disillusioned teenager Holden Caulfield as he navigates the complexities of adulthood and society's expectations in post-World War II America.
6 | Pride and Prejudice (1813) | romance | Jane Austen's beloved novel revolves around the lively and independent Elizabeth Bennet as she navigates love, class, and societal expectations in Regency-era England.
7 | The Hobbit (1937) | fantasy | J.R.R. Tolkien's adventure follows Bilbo Baggins, a hobbit who embarks on a quest with a group of dwarves to reclaim their homeland from the dragon Smaug.
8 | The Lord of the Rings (1954-1955) | fantasy | J.R.R. Tolkien's epic fantasy trilogy follows the journey of Frodo Baggins to destroy the One Ring and defeat the Dark Lord Sauron in the land of Middle-earth.
9 | The Alchemist (1988) | fiction | Paulo Coelho's philosophical novel follows Santiago, an Andalusian shepherd boy, on a journey of self-discovery and spiritual awakening as he searches for a hidden treasure.
10 | The Da Vinci Code (2003) | mystery/thriller | Dan Brown's gripping thriller follows symbologist Robert Langdon as he unravels clues hidden in art and history while trying to solve a murder mystery with far-reaching implications.
```

Now, the hacking part - writing our Airflow DAG!

### Write the dag

We'll add the following content to a `books_recommend.py` file within the `dags` directory. Let's go over what it does for each task.

```python
import os

import requests
from airflow.decorators import dag, task
from airflow.models.baseoperator import chain
from airflow.models.param import Param
from airflow.providers.qdrant.hooks.qdrant import QdrantHook
from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator
from pendulum import datetime
from qdrant_client import models

QDRANT_CONNECTION_ID = "qdrant_default"
DATA_FILE_PATH = "include/books.txt"
COLLECTION_NAME = "airflow_tutorial_collection"

EMBEDDING_MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"
EMBEDDING_DIMENSION = 384
SIMILARITY_METRIC = models.Distance.COSINE


def embed(text: str) -> list:
    HUGGINGFACE_URL = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{EMBEDDING_MODEL_ID}"
    response = requests.post(
        HUGGINGFACE_URL,
        headers={"Authorization": f"Bearer {os.getenv('HUGGINGFACE_TOKEN')}"},
        json={"inputs": [text], "options": {"wait_for_model": True}},
    )
    return response.json()[0]


@dag(
    dag_id="books_recommend",
    start_date=datetime(2023, 10, 18),
    schedule=None,
    catchup=False,
    params={"preference": Param("Something suspenseful and thrilling.", type="string")},
)
def recommend_book():
    @task
    def import_books(text_file_path: str) -> list:
        data = []
        with open(text_file_path, "r") as f:
            for line in f:
                _, title, genre, description = line.split("|")
                data.append(
                    {
                        "title": title.strip(),
                        "genre": genre.strip(),
                        "description": description.strip(),
                    }
                )

        return data

    @task
    def init_collection():
        hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID)
        if not hook.conn.collection_exists(COLLECTION_NAME):
            hook.conn.create_collection(
                COLLECTION_NAME,
                vectors_config=models.VectorParams(
                    size=EMBEDDING_DIMENSION, distance=SIMILARITY_METRIC
                ),
            )

    @task
    def embed_description(data: dict) -> list:
        return embed(data["description"])

    books = import_books(text_file_path=DATA_FILE_PATH)
    embeddings = embed_description.expand(data=books)

    qdrant_vector_ingest = QdrantIngestOperator(
        conn_id=QDRANT_CONNECTION_ID,
        task_id="qdrant_vector_ingest",
        collection_name=COLLECTION_NAME,
        payload=books,
        vectors=embeddings,
    )

    @task
    def embed_preference(**context) -> list:
        user_mood = context["params"]["preference"]
        response = embed(text=user_mood)

        return response

    @task
    def search_qdrant(
        preference_embedding: list,
    ) -> None:
        hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID)

        result = hook.conn.query_points(
            collection_name=COLLECTION_NAME,
            query=preference_embedding,
            limit=1,
            with_payload=True,
        ).points

        print("Book recommendation: " + result[0].payload["title"])
        print("Description: " + result[0].payload["description"])

    chain(
        init_collection(),
        qdrant_vector_ingest,
        search_qdrant(embed_preference()),
    )


recommend_book()
```

`import_books`: This task reads a text file containing information about the books (like title, genre, and description), and then returns the data as a list of dictionaries.

`init_collection`: This task initializes a collection in the Qdrant database, where we will store the vector representations of the book descriptions.

`embed_description`: This is a dynamic task that creates one mapped task instance for each book in the list. The task uses the `embed` function to generate vector embeddings for each description. To use a different embedding model, you can adjust the `EMBEDDING_MODEL_ID` and `EMBEDDING_DIMENSION` values.

`embed_preference`: Here, we take the user's input and convert it into a vector using the same pre-trained model used for the book descriptions.

`qdrant_vector_ingest`: This task ingests the book data into the Qdrant collection using the [QdrantIngestOperator](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/), associating each book description with its corresponding vector embeddings.

`search_qdrant`: Finally, this task performs a search in the Qdrant database using the vectorized user preference. It finds the most relevant book in the collection based on vector similarity.

### Run the DAG

Head over to your terminal and run:

```console
astro dev start
```

A local Airflow container should spawn. You can now access the Airflow UI at <http://localhost:8080>. Visit our DAG by clicking on `books_recommend`.

![DAG](/documentation/examples/airflow/demo-dag.png)

Hit the PLAY button on the right to run the DAG. You'll be asked for input about your preference, with the default value already filled in.

![Preference](/documentation/examples/airflow/preference-input.png)

After your DAG run completes, you should be able to see the output of your search in the logs of the `search_qdrant` task.

![Output](/documentation/examples/airflow/output.png)

There you have it, an Airflow pipeline that interfaces with Qdrant! Feel free to fiddle around and explore Airflow. There are references below that might come in handy.

## Further reading

- [Introduction to Airflow](https://docs.astronomer.io/learn/intro-to-airflow)
- [Airflow Concepts](https://docs.astronomer.io/learn/category/airflow-concepts)
- [Airflow Reference](https://airflow.apache.org/docs/)
- [Astronomer Documentation](https://docs.astronomer.io/)
documentation/send-data/qdrant-airflow-astronomer.md
--- title: Qdrant on Databricks weight: 36 aliases: - /documentation/examples/databricks/ --- # Qdrant on Databricks | Time: 30 min | Level: Intermediate | [Complete Notebook](https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/4750876096379825/93425612168199/6949977306828869/latest.html) | | ------------ | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | [Databricks](https://www.databricks.com/) is a unified analytics platform for working with big data and AI. It's built around Apache Spark, a powerful open-source distributed computing system well-suited for processing large-scale datasets and performing complex analytics tasks. Apache Spark is designed to scale horizontally, meaning it can handle expensive operations like generating vector embeddings by distributing computation across a cluster of machines. This scalability is crucial when dealing with large datasets. In this example, we will demonstrate how to vectorize a dataset with dense and sparse embeddings using Qdrant's [FastEmbed](https://qdrant.github.io/fastembed/) library. We will then load this vectorized data into a Qdrant cluster using the [Qdrant Spark connector](/documentation/frameworks/spark/) on Databricks. ### Setting up a Databricks project - Set up a **[Databricks cluster](https://docs.databricks.com/en/compute/configure.html)** following the official documentation guidelines. - Install the **[Qdrant Spark connector](/documentation/frameworks/spark/)** as a library: - Navigate to the `Libraries` section in your cluster dashboard. - Click on `Install New` at the top-right to open the library installation modal. - Search for `io.qdrant:spark:VERSION` in the Maven packages and click on `Install`. ![Install the library](/documentation/examples/databricks/library-install.png) - Create a new **[Databricks notebook](https://docs.databricks.com/en/notebooks/index.html)** on your cluster to begin working with your data and libraries. ### Download a dataset - **Install the required dependencies:** ```python %pip install fastembed datasets ``` - **Download the dataset:** ```python from datasets import load_dataset dataset_name = "tasksource/med" dataset = load_dataset(dataset_name, split="train") # We'll use the first 100 entries from this dataset and exclude some unused columns. dataset = dataset.select(range(100)).remove_columns(["gold_label", "genre"]) ``` - **Convert the dataset into a Spark dataframe:** ```python dataset.to_parquet("/dbfs/pq.pq") dataset_df = spark.read.parquet("file:/dbfs/pq.pq") ``` ### Vectorizing the data In this section, we'll be generating both dense and sparse vectors for our rows using [FastEmbed](https://qdrant.github.io/fastembed/). We'll create a user-defined function (UDF) to handle this step. 
#### Creating the vectorization function

```python
from fastembed import TextEmbedding, SparseTextEmbedding

def vectorize(partition_data):
    # Initialize dense and sparse models
    dense_model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
    sparse_model = SparseTextEmbedding(model_name="Qdrant/bm25")

    for row in partition_data:
        # Generate dense and sparse vectors
        dense_vector = next(dense_model.embed(row.sentence1))
        sparse_vector = next(sparse_model.embed(row.sentence2))

        yield [
            row.sentence1,  # 1st column: original text
            row.sentence2,  # 2nd column: original text
            dense_vector.tolist(),  # 3rd column: dense vector
            sparse_vector.indices.tolist(),  # 4th column: sparse vector indices
            sparse_vector.values.tolist(),  # 5th column: sparse vector values
        ]
```

We're using the [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) model for dense embeddings and [BM25](https://huggingface.co/Qdrant/bm25) for sparse embeddings.

#### Applying the UDF on our dataframe

Next, let's apply our `vectorize` UDF on our Spark dataframe to generate embeddings.

```python
embeddings = dataset_df.rdd.mapPartitions(vectorize)
```

The `mapPartitions()` method returns a [Resilient Distributed Dataset (RDD)](https://www.databricks.com/glossary/what-is-rdd) which should then be converted back to a Spark dataframe.

#### Building the new Spark dataframe with the vectorized data

We'll now create a new Spark dataframe (`embeddings_df`) with the vectorized data using the specified schema.

```python
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, FloatType, IntegerType

# Define the schema for the new dataframe
schema = StructType([
    StructField("sentence1", StringType()),
    StructField("sentence2", StringType()),
    StructField("dense_vector", ArrayType(FloatType())),
    StructField("sparse_vector_indices", ArrayType(IntegerType())),
    StructField("sparse_vector_values", ArrayType(FloatType()))
])

# Create the new dataframe with the vectorized data
embeddings_df = spark.createDataFrame(data=embeddings, schema=schema)
```

### Uploading the data to Qdrant

- **Create a Qdrant collection:**
  - [Follow the documentation](/documentation/concepts/collections/#create-a-collection) to create a collection with the appropriate configurations. Here's an example request to support both dense and sparse vectors:

```json
PUT /collections/{collection_name}
{
  "vectors": {
    "dense": {
      "size": 384,
      "distance": "Cosine"
    }
  },
  "sparse_vectors": {
    "sparse": {}
  }
}
```

- **Upload the dataframe to Qdrant:**

```python
options = {
    "qdrant_url": "<QDRANT_GRPC_URL>",
    "api_key": "<QDRANT_API_KEY>",
    "collection_name": "<QDRANT_COLLECTION_NAME>",
    "vector_fields": "dense_vector",
    "vector_names": "dense",
    "sparse_vector_value_fields": "sparse_vector_values",
    "sparse_vector_index_fields": "sparse_vector_indices",
    "sparse_vector_names": "sparse",
    "schema": embeddings_df.schema.json(),
}

embeddings_df.write.format("io.qdrant.spark.Qdrant").options(**options).mode(
    "append"
).save()
```

<aside role="status">
  <p>You can find the list of the Spark connector configuration options <a href="/documentation/frameworks/spark/#configuration-options" target="_blank">here</a>.</p>
</aside>

Make sure to replace the placeholder values (`<QDRANT_GRPC_URL>`, `<QDRANT_API_KEY>`, `<QDRANT_COLLECTION_NAME>`) with your actual values. If the `id_field` option is not specified, the Qdrant Spark connector generates random UUIDs for each point.
The command output you should see is similar to:

```console
Command took 40.37 seconds -- by xxxxx90@xxxxxx.com at 4/17/2024, 12:13:28 PM on fastembed
```

### Conclusion

That wraps up our tutorial! Feel free to explore more functionalities and experiment with different models, parameters, and features available in Databricks, Spark, and Qdrant.

Happy data engineering!
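As a quick sanity check after the upload, you can query the collection from the same notebook. This is a minimal sketch — it assumes the collection and named vectors configured above, embeds the query with the same FastEmbed dense model, and requires a recent `qdrant-client` with the `query_points` API:

```python
from fastembed import TextEmbedding
from qdrant_client import QdrantClient

client = QdrantClient(url="<QDRANT_URL>", api_key="<QDRANT_API_KEY>")
dense_model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Embed the query with the same dense model used during vectorization
query_vector = next(dense_model.embed("a patient presenting with a fever")).tolist()

# Query against the named "dense" vector configured for the collection
results = client.query_points(
    collection_name="<QDRANT_COLLECTION_NAME>",
    query=query_vector,
    using="dense",
    limit=3,
).points

for point in results:
    print(point.score, point.payload)
```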
documentation/send-data/databricks.md
---
title: How to Setup Seamless Data Streaming with Kafka and Qdrant
weight: 49
aliases:
  - /examples/data-streaming-kafka-qdrant/
---

# Setup Data Streaming with Kafka via Confluent

**Author:** [M K Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/), research scholar at [IIITDM, Kurnool](https://iiitk.ac.in). Specialist in hallucination mitigation techniques and RAG methodologies.
• [GitHub](https://github.com/pavanjava) • [Medium](https://medium.com/@manthapavankumar11)

## Introduction

This guide will walk you through the detailed steps of installing and setting up the [Qdrant Sink Connector](https://github.com/qdrant/qdrant-kafka), building the necessary infrastructure, and creating a practical playground application. By the end of this article, you will have a deep understanding of how to leverage this powerful integration to streamline your data workflows, ultimately enhancing the performance and capabilities of your data-driven real-time semantic search and RAG applications.

In this example, original data will be sourced from Azure Blob Storage and MongoDB.

![1.webp](/documentation/examples/data-streaming-kafka-qdrant/1.webp)

Figure 1: [Real-time Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/) with Kafka and Qdrant.

## The Architecture

### Source Systems

The architecture begins with the **source systems**, represented by MongoDB and Azure Blob Storage. These systems are vital for storing and managing raw data. MongoDB, a popular NoSQL database, is known for its flexibility in handling various data formats and its capability to scale horizontally. It is widely used for applications that require high performance and scalability. Azure Blob Storage, on the other hand, is Microsoft's object storage solution for the cloud. It is designed for storing massive amounts of unstructured data, such as text or binary data. The data from these sources is extracted using **source connectors**, which are responsible for capturing changes in real-time and streaming them into Kafka.

### Kafka

At the heart of this architecture lies **Kafka**, a distributed event streaming platform capable of handling trillions of events a day. Kafka acts as a central hub where data from various sources can be ingested, processed, and distributed to various downstream systems. Its fault-tolerant and scalable design ensures that data can be reliably transmitted and processed in real-time. Kafka's capability to handle high-throughput, low-latency data streams makes it an ideal choice for real-time data processing and analytics. The use of **Confluent** enhances Kafka's functionalities, providing additional tools and services for managing Kafka clusters and stream processing.

### Qdrant

The processed data is then routed to **Qdrant**, a highly scalable vector search engine designed for similarity searches. Qdrant excels at managing and searching through high-dimensional vector data, which is essential for applications involving machine learning and AI, such as recommendation systems, image recognition, and natural language processing. The **Qdrant Sink Connector** for Kafka plays a pivotal role here, enabling seamless integration between Kafka and Qdrant. This connector allows for the real-time ingestion of vector data into Qdrant, ensuring that the data is always up-to-date and ready for high-performance similarity searches.

### Integration and Pipeline Importance

The integration of these components forms a powerful and efficient data streaming pipeline.
The **Qdrant Sink Connector** ensures that the data flowing through Kafka is continuously ingested into Qdrant without any manual intervention. This real-time integration is crucial for applications that rely on the most current data for decision-making and analysis. By combining the strengths of MongoDB and Azure Blob Storage for data storage, Kafka for data streaming, and Qdrant for vector search, this pipeline provides a robust solution for managing and processing large volumes of data in real-time. The architecture's scalability, fault-tolerance, and real-time processing capabilities are key to its effectiveness, making it a versatile solution for modern data-driven applications.

## Installation of Confluent Kafka Platform

To install the Confluent Kafka Platform (self-managed locally), follow these three simple steps:

**Download and Extract the Distribution Files:**

- Visit the [Confluent Installation Page](https://www.confluent.io/installation/).
- Download the distribution files (tar, zip, etc.).
- Extract the downloaded file using:

```bash
tar -xvf confluent-<version>.tar.gz
```

or

```bash
unzip confluent-<version>.zip
```

**Configure Environment Variables:**

```bash
# Set CONFLUENT_HOME to the installation directory:
export CONFLUENT_HOME=/path/to/confluent-<version>

# Add Confluent binaries to your PATH
export PATH=$CONFLUENT_HOME/bin:$PATH
```

**Run Confluent Platform Locally:**

```bash
# Start the Confluent Platform services:
confluent local start

# Stop the Confluent Platform services:
confluent local stop
```

## Installation of Qdrant

To install and run Qdrant (self-managed locally), you can use Docker, which simplifies the process. First, ensure you have Docker installed on your system. Then, you can pull the Qdrant image from Docker Hub and run it with the following commands:

```bash
docker pull qdrant/qdrant
docker run -p 6334:6334 -p 6333:6333 qdrant/qdrant
```

This will download the Qdrant image and start a Qdrant instance accessible at `http://localhost:6333`. For more detailed instructions and alternative installation methods, refer to the [Qdrant installation documentation](https://qdrant.tech/documentation/quick-start/).

## Installation of the Qdrant-Kafka Sink Connector

To install the Qdrant Kafka connector using [Confluent Hub](https://www.confluent.io/hub/), you can utilize the straightforward `confluent-hub install` command. This command simplifies the process by eliminating the need for manual configuration file manipulations. To install the Qdrant Kafka connector version 1.1.0, execute the following command in your terminal:

```bash
confluent-hub install qdrant/qdrant-kafka:1.1.0
```

This command downloads and installs the specified connector directly from Confluent Hub into your Confluent Platform or Kafka Connect environment. The installation process ensures that all necessary dependencies are handled automatically, allowing for a seamless integration of the Qdrant Kafka connector with your existing setup. Once installed, the connector can be configured and managed using the Confluent Control Center or the Kafka Connect REST API, enabling efficient data streaming between Kafka and Qdrant without the need for intricate manual setup.

![2.webp](/documentation/examples/data-streaming-kafka-qdrant/2.webp)

*Figure 2: Local Confluent platform showing the Source and Sink connectors after installation.*

Once the connector is installed, configure it as shown below.
Keep in mind that your `key.converter` and `value.converter` settings are very important for Kafka to safely deliver the messages from the topic to Qdrant.

```json
{
  "name": "QdrantSinkConnectorConnector_0",
  "config": {
    "value.converter.schemas.enable": "false",
    "name": "QdrantSinkConnectorConnector_0",
    "connector.class": "io.qdrant.kafka.QdrantSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "topics": "topic_62,qdrant_kafka.docs",
    "errors.deadletterqueue.topic.name": "dead_queue",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "qdrant.grpc.url": "http://localhost:6334",
    "qdrant.api.key": "************"
  }
}
```

## Installation of MongoDB

For Kafka to connect to MongoDB as a source, your MongoDB instance should be running in `replicaSet` mode. Below is the `docker compose` file that will spin up a single-node `replicaSet` instance of MongoDB.

```yaml
version: "3.8"

services:
  mongo1:
    image: mongo:7.0
    command: ["--replSet", "rs0", "--bind_ip_all", "--port", "27017"]
    ports:
      - 27017:27017
    healthcheck:
      test: echo "try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'host.docker.internal:27017'}]}) }" | mongosh --port 27017 --quiet
      interval: 5s
      timeout: 30s
      start_period: 0s
      start_interval: 1s
      retries: 30
    volumes:
      - "mongo1_data:/data/db"
      - "mongo1_config:/data/configdb"

volumes:
  mongo1_data:
  mongo1_config:
```

Similarly, install and configure the source connector as below.

```bash
confluent-hub install mongodb/kafka-connect-mongodb:latest
```

After installing the `MongoDB` connector, the connector configuration should look like this:

```json
{
  "name": "MongoSourceConnectorConnector_0",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "connection.uri": "mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true",
    "database": "qdrant_kafka",
    "collection": "docs",
    "publish.full.document.only": "true",
    "topic.namespace.map": "{\"*\":\"qdrant_kafka.docs\"}",
    "copy.existing": "true"
  }
}
```

## Playground Application

With the infrastructure fully set up, it's time to create a simple application and verify our setup. The objective is simple: data inserted into MongoDB eventually gets ingested into Qdrant as well, via [Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/).

`requirements.txt`

```text
fastembed==0.3.1
pymongo==4.8.0
qdrant_client==1.10.1
```

`project_root_folder/main.py`

This is just sample code; nevertheless, it can be extended to millions of operations based on your use case.
```python
from pymongo import MongoClient
from utils.app_utils import create_qdrant_collection
from fastembed import TextEmbedding

collection_name: str = 'test'
embed_model_name: str = 'snowflake/snowflake-arctic-embed-s'
```

```python
# Step 0: create qdrant_collection
create_qdrant_collection(collection_name=collection_name, embed_model=embed_model_name)

# Step 1: Connect to MongoDB
client = MongoClient('mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true')

# Step 2: Select Database
db = client['qdrant_kafka']

# Step 3: Select Collection
collection = db['docs']

# Step 4: Create a Document to Insert
description = "Qdrant is a highly available vector search engine"
embedding_model = TextEmbedding(model_name=embed_model_name)
vector = next(embedding_model.embed(documents=description)).tolist()

document = {
    "collection_name": collection_name,
    "id": 1,
    "vector": vector,
    "payload": {
        "name": "qdrant",
        "description": description,
        "url": "https://qdrant.tech/documentation"
    }
}

# Step 5: Insert the Document into the Collection
result = collection.insert_one(document)

# Step 6: Print the Inserted Document's ID
print("Inserted document ID:", result.inserted_id)
```

`project_root_folder/utils/app_utils.py`

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333", api_key="<YOUR_KEY>")
dimension_dict = {"snowflake/snowflake-arctic-embed-s": 384}


def create_qdrant_collection(collection_name: str, embed_model: str):
    if not client.collection_exists(collection_name=collection_name):
        client.create_collection(
            collection_name=collection_name,
            vectors_config=models.VectorParams(size=dimension_dict.get(embed_model), distance=models.Distance.COSINE)
        )
```

Before we run the application, below is the state of the MongoDB and Qdrant databases.

![3.webp](/documentation/examples/data-streaming-kafka-qdrant/3.webp)

Figure 3: Initial state: no collection named `test` in Qdrant and no data in the `docs` collection of MongoDB.

Once you run the code, the data goes into MongoDB, the CDC gets triggered, and eventually Qdrant receives this data.

![4.webp](/documentation/examples/data-streaming-kafka-qdrant/4.webp)

Figure 4: The test Qdrant collection is created automatically.

![5.webp](/documentation/examples/data-streaming-kafka-qdrant/5.webp)

Figure 5: Data is inserted into both MongoDB and Qdrant.

## Conclusion

In conclusion, the integration of **Kafka** with **Qdrant** using the **Qdrant Sink Connector** provides a seamless and efficient solution for real-time data streaming and processing. This setup not only enhances the capabilities of your data pipeline but also ensures that high-dimensional vector data is continuously indexed and readily available for similarity searches. By following the installation and setup guide, you can easily establish a robust data flow from your **source systems** like **MongoDB** and **Azure Blob Storage**, through **Kafka**, and into **Qdrant**. This architecture empowers modern applications to leverage real-time data insights and advanced search capabilities, paving the way for innovative data-driven solutions.
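To confirm the pipeline end to end programmatically, you can also retrieve the point directly from Qdrant once CDC has propagated it. A minimal check, assuming the collection name and point ID used above:

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

# The document was inserted into MongoDB with id=1 and routed by the
# sink connector into the "test" collection.
points = client.retrieve(collection_name="test", ids=[1], with_payload=True)
print(points)
```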
documentation/send-data/data-streaming-kafka-qdrant.md
--- title: Send Data to Qdrant weight: 18 --- ## How to Send Your Data to a Qdrant Cluster | Example | Description | Stack | |---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------| | [Pinecone to Qdrant Data Transfer](https://githubtocolab.com/qdrant/examples/blob/master/data-migration/from-pinecone-to-qdrant.ipynb) | Migrate your vector data from Pinecone to Qdrant. | Qdrant, Vector-io | | [Stream Data to Qdrant with Kafka](../send-data/data-streaming-kafka-qdrant/) | Use Confluent to Stream Data to Qdrant via Managed Kafka. | Qdrant, Kafka | | [Qdrant on Databricks](../send-data/databricks/) | Learn how to use Qdrant on Databricks using the Spark connector | Qdrant, Databricks, Apache Spark | | [Qdrant with Airflow and Astronomer](../send-data/qdrant-airflow-astronomer/) | Build a semantic querying system using Airflow and Astronomer | Qdrant, Airflow, Astronomer |
documentation/send-data/_index.md
--- title: Snowflake Models weight: 2900 --- # Snowflake Qdrant supports working with [Snowflake](https://www.snowflake.com/blog/introducing-snowflake-arctic-embed-snowflakes-state-of-the-art-text-embedding-family-of-models/) text embedding models. You can find all the available models on [HuggingFace](https://huggingface.co/Snowflake). ### Setting up the Qdrant and Snowflake models ```python from qdrant_client import QdrantClient from fastembed import TextEmbedding qclient = QdrantClient(":memory:") embedding_model = TextEmbedding("snowflake/snowflake-arctic-embed-s") texts = [ "Qdrant is the best vector search engine!", "Loved by Enterprises and everyone building for low latency, high performance, and scale.", ] ``` ```typescript import {QdrantClient} from '@qdrant/js-client-rest'; import { pipeline } from '@xenova/transformers'; const client = new QdrantClient({ url: 'http://localhost:6333' }); const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-s'); const texts = [ "Qdrant is the best vector search engine!", "Loved by Enterprises and everyone building for low latency, high performance, and scale.", ] ``` The following example shows how to embed documents with the [`snowflake-arctic-embed-s`](https://huggingface.co/Snowflake/snowflake-arctic-embed-s) model that generates sentence embeddings of size 384. ### Embedding documents ```python embeddings = embedding_model.embed(texts) ``` ```typescript const embeddings = await extractor(texts, { normalize: true, pooling: 'cls' }); ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=embedding, payload={"text": text}, ) for idx, (embedding, text) in enumerate(zip(embeddings, texts)) ] ``` ```typescript let points = embeddings.tolist().map((embedding, i) => { return { id: i, vector: embedding, payload: { text: texts[i] } } }); ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance COLLECTION_NAME = "example_collection" qclient.create_collection( COLLECTION_NAME, vectors_config=VectorParams( size=384, distance=Distance.COSINE, ), ) qclient.upsert(COLLECTION_NAME, points) ``` ```typescript const COLLECTION_NAME = "example_collection" await client.createCollection(COLLECTION_NAME, { vectors: { size: 384, distance: 'Cosine', } }); await client.upsert(COLLECTION_NAME, { wait: true, points }); ``` ### Searching for documents with Qdrant Once the documents are added, you can search for the most relevant documents. ```python query_embedding = next(embedding_model.query_embed("What is the best to use for vector search scaling?")) qclient.search( collection_name=COLLECTION_NAME, query_vector=query_embedding, ) ``` ```typescript const query_embedding = await extractor("What is the best to use for vector search scaling?", { normalize: true, pooling: 'cls' }); await client.search(COLLECTION_NAME, { vector: query_embedding.tolist()[0], }); ```
documentation/embeddings/snowflake.md
---
title: Watsonx
weight: 3000
aliases:
  - /documentation/examples/watsonx-search/
  - /documentation/tutorials/watsonx-search/
  - /documentation/integrations/watsonx/
---

# Using Watsonx with Qdrant

Watsonx is IBM's platform for enterprise AI, and it provides text embedding models focused on enterprise-level text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant.

## Installation

IBM ships an official Python SDK, `ibm-watsonx-ai`, which you can install with the following pip command:

```bash
pip install ibm-watsonx-ai
```

## Code Example

The snippet below is a minimal sketch rather than a definitive implementation: it assumes the `Embeddings` interface of the `ibm-watsonx-ai` SDK, and the model ID, endpoint URL, and project ID are placeholders that you should replace with values from your own watsonx.ai project.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Embeddings
from qdrant_client import QdrantClient
from qdrant_client.models import Batch, Distance, VectorParams

# Initialize the watsonx.ai embeddings model (IDs and URL are placeholders)
model = Embeddings(
    model_id="ibm/slate-30m-english-rtrvr",
    credentials=Credentials(
        api_key="<YOUR_IBM_CLOUD_API_KEY>",
        url="https://us-south.ml.cloud.ibm.com",
    ),
    project_id="<YOUR_PROJECT_ID>",
)

# Generate an embedding for enterprise data
text = "Watsonx provides enterprise-level NLP solutions."
vector = model.embed_documents(texts=[text])[0]

# Initialize the Qdrant client and create the collection if it doesn't exist
client = QdrantClient(host="localhost", port=6333)
if not client.collection_exists("EnterpriseData"):
    client.create_collection(
        "EnterpriseData",
        vectors_config=VectorParams(size=len(vector), distance=Distance.COSINE),
    )

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="EnterpriseData",
    points=Batch(ids=[1], vectors=[vector]),
)
```
documentation/embeddings/watsonx.md
---
title: Instruct
weight: 1800
---

# Using Instruct with Qdrant

Instruct (INSTRUCTOR) is an instruction-finetuned text embedder whose detailed, task-aware embeddings can be effectively used with Qdrant. With INSTRUCTOR, every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training.

## Installation

The model is distributed via the `InstructorEmbedding` package:

```bash
pip install InstructorEmbedding sentence_transformers
```

Below is a sketch of how to obtain INSTRUCTOR embeddings and store them in a Qdrant collection. The instruction string and collection name are illustrative — tailor the instruction to your own task and domain:

```python
from InstructorEmbedding import INSTRUCTOR
from qdrant_client import QdrantClient
from qdrant_client.models import Batch, Distance, VectorParams

# Initialize the INSTRUCTOR model
model = INSTRUCTOR("hkunlp/instructor-base")

# Each input is embedded together with an instruction describing the use case
instruction = "Represent the educational text for retrieval:"
text = "Instruct provides detailed embeddings for learning content."
embedding = model.encode([[instruction, text]])[0]

# Initialize the Qdrant client and create the collection if it doesn't exist
client = QdrantClient(host="localhost", port=6333)
if not client.collection_exists("LearningContent"):
    client.create_collection(
        "LearningContent",
        vectors_config=VectorParams(size=len(embedding), distance=Distance.COSINE),
    )

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="LearningContent",
    points=Batch(ids=[1], vectors=[embedding.tolist()]),
)
```
documentation/embeddings/instruct.md
---
title: GPT4All
weight: 1700
---

# Using GPT4All with Qdrant

GPT4All offers a range of large language models that can be fine-tuned for various applications. GPT4All runs large language models (LLMs) privately on everyday desktops & laptops. No API calls or GPUs required - you can just download the application and get started.

Use GPT4All in Python to program with LLMs implemented with the llama.cpp backend and Nomic's C backend.

## Installation

You can install the required package using the following pip command:

```bash
pip install gpt4all
```

Here is a sketch of how you might connect GPT4All with Qdrant. Embeddings are generated with the package's `Embed4All` class, which runs a local embedding model; the collection setup is included so the snippet is self-contained:

```python
from gpt4all import Embed4All
from qdrant_client import QdrantClient
from qdrant_client.models import Batch, Distance, VectorParams

# Initialize the local GPT4All embedding model
# (downloads a small embedding model on first use)
embedder = Embed4All()

# Generate an embedding for a text
text = "GPT4All enables open-source AI applications."
embedding = embedder.embed(text)

# Initialize the Qdrant client and create the collection if it doesn't exist
client = QdrantClient(host="localhost", port=6333)
if not client.collection_exists("OpenSourceAI"):
    client.create_collection(
        "OpenSourceAI",
        vectors_config=VectorParams(size=len(embedding), distance=Distance.COSINE),
    )

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="OpenSourceAI",
    points=Batch(ids=[1], vectors=[embedding]),
)
```
documentation/embeddings/gpt4all.md
--- title: Voyage AI weight: 3200 --- # Voyage AI Qdrant supports working with [Voyage AI](https://voyageai.com/) embeddings. The supported models' list can be found [here](https://docs.voyageai.com/docs/embeddings). You can generate an API key from the [Voyage AI dashboard](<https://dash.voyageai.com/>) to authenticate the requests. ### Setting up the Qdrant and Voyage clients ```python from qdrant_client import QdrantClient import voyageai VOYAGE_API_KEY = "<YOUR_VOYAGEAI_API_KEY>" qclient = QdrantClient(":memory:") vclient = voyageai.Client(api_key=VOYAGE_API_KEY) texts = [ "Qdrant is the best vector search engine!", "Loved by Enterprises and everyone building for low latency, high performance, and scale.", ] ``` ```typescript import {QdrantClient} from '@qdrant/js-client-rest'; const VOYAGEAI_BASE_URL = "https://api.voyageai.com/v1/embeddings" const VOYAGEAI_API_KEY = "<YOUR_VOYAGEAI_API_KEY>" const client = new QdrantClient({ url: 'http://localhost:6333' }); const headers = { "Authorization": "Bearer " + VOYAGEAI_API_KEY, "Content-Type": "application/json" } const texts = [ "Qdrant is the best vector search engine!", "Loved by Enterprises and everyone building for low latency, high performance, and scale.", ] ``` The following example shows how to embed documents with the [`voyage-large-2`](https://docs.voyageai.com/docs/embeddings#model-choices) model that generates sentence embeddings of size 1536. ### Embedding documents ```python response = vclient.embed(texts, model="voyage-large-2", input_type="document") ``` ```typescript let body = { "input": texts, "model": "voyage-large-2", "input_type": "document", } let response = await fetch(VOYAGEAI_BASE_URL, { method: "POST", body: JSON.stringify(body), headers }); let response_body = await response.json(); ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=embedding, payload={"text": text}, ) for idx, (embedding, text) in enumerate(zip(response.embeddings, texts)) ] ``` ```typescript let points = response_body.data.map((data, i) => { return { id: i, vector: data.embedding, payload: { text: texts[i] } } }); ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance COLLECTION_NAME = "example_collection" qclient.create_collection( COLLECTION_NAME, vectors_config=VectorParams( size=1536, distance=Distance.COSINE, ), ) qclient.upsert(COLLECTION_NAME, points) ``` ```typescript const COLLECTION_NAME = "example_collection" await client.createCollection(COLLECTION_NAME, { vectors: { size: 1536, distance: 'Cosine', } }); await client.upsert(COLLECTION_NAME, { wait: true, points }); ``` ### Searching for documents with Qdrant Once the documents are added, you can search for the most relevant documents. ```python response = vclient.embed( ["What is the best to use for vector search scaling?"], model="voyage-large-2", input_type="query", ) qclient.search( collection_name=COLLECTION_NAME, query_vector=response.embeddings[0], ) ``` ```typescript body = { "input": ["What is the best to use for vector search scaling?"], "model": "voyage-large-2", "input_type": "query", }; response = await fetch(VOYAGEAI_BASE_URL, { method: "POST", body: JSON.stringify(body), headers }); response_body = await response.json(); await client.search(COLLECTION_NAME, { vector: response_body.data[0].embedding, }); ```
documentation/embeddings/voyage.md
---
title: Together AI
weight: 3000
---

# Using Together AI with Qdrant

[Together AI](https://www.together.ai/) provides hosted open-source models, including text embedding models, whose outputs can be stored and searched in Qdrant.

## Installation

Together AI's official Python SDK is published as `together`, which you can install with the following pip command:

```bash
pip install together
```

## Integration Example

The snippet below is a sketch using the `together` SDK's embeddings endpoint. The embedding model ID is an example — check Together AI's model listing for the embedding models currently available:

```python
from together import Together
from qdrant_client import QdrantClient
from qdrant_client.models import Batch, Distance, VectorParams

together_client = Together(api_key="<YOUR_TOGETHER_API_KEY>")

# Generate an embedding (model ID is an example)
text = "Together AI enhances collaborative content search."
response = together_client.embeddings.create(
    model="togethercomputer/m2-bert-80M-8k-retrieval",
    input=text,
)
embedding = response.data[0].embedding

# Initialize the Qdrant client and create the collection if it doesn't exist
client = QdrantClient(host="localhost", port=6333)
if not client.collection_exists("CollaborativeContent"):
    client.create_collection(
        "CollaborativeContent",
        vectors_config=VectorParams(size=len(embedding), distance=Distance.COSINE),
    )

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="CollaborativeContent",
    points=Batch(ids=[1], vectors=[embedding]),
)
```
documentation/embeddings/togetherai.md
---
title: OpenAI
weight: 2700
aliases: [ ../integrations/openai/ ]
---

# OpenAI

Qdrant supports working with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings).

There is an official OpenAI Python package that simplifies obtaining them, and it can be installed with pip:

```bash
pip install openai
```

### Setting up the OpenAI and Qdrant clients

```python
import openai
import qdrant_client

openai_client = openai.Client(
    api_key="<YOUR_API_KEY>"
)

client = qdrant_client.QdrantClient(":memory:")

texts = [
    "Qdrant is the best vector search engine!",
    "Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```

The following example shows how to embed a document with the `text-embedding-3-small` model that generates sentence embeddings of size 1536. You can find the list of all supported models [here](https://platform.openai.com/docs/models/embeddings).

### Embedding a document

```python
embedding_model = "text-embedding-3-small"

result = openai_client.embeddings.create(input=texts, model=embedding_model)
```

### Converting the model outputs to Qdrant points

```python
from qdrant_client.models import PointStruct

points = [
    PointStruct(
        id=idx,
        vector=data.embedding,
        payload={"text": text},
    )
    for idx, (data, text) in enumerate(zip(result.data, texts))
]
```

### Creating a collection to insert the documents

```python
from qdrant_client.models import VectorParams, Distance

collection_name = "example_collection"

client.create_collection(
    collection_name,
    vectors_config=VectorParams(
        size=1536,
        distance=Distance.COSINE,
    ),
)
client.upsert(collection_name, points)
```

## Searching for documents with Qdrant

Once the documents are indexed, you can search for the most relevant documents using the same model.

```python
client.search(
    collection_name=collection_name,
    query_vector=openai_client.embeddings.create(
        input=["What is the best to use for vector search scaling?"],
        model=embedding_model,
    )
    .data[0]
    .embedding,
)
```

## Using OpenAI Embedding Models with Qdrant's Binary Quantization

You can use OpenAI embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.

|Method|Dimensionality|Test Dataset|Recall|Oversampling|
|-|-|-|-|-|
|OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
|OpenAI text-embedding-ada-002|1536|[DBpedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x|
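To enable Binary Quantization for a collection of OpenAI embeddings, you configure it when creating the collection, and use oversampling with rescoring at query time. A brief sketch along the lines of the table above (the collection name and oversampling factor are illustrative):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")

client.create_collection(
    "bq_collection",
    vectors_config=models.VectorParams(
        size=1536,  # e.g. text-embedding-3-small
        distance=models.Distance.COSINE,
    ),
    # Keep the 1-bit compressed vectors in RAM alongside the original vectors
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

# At query time, oversample the quantized candidates and rescore them
# with the original vectors to recover accuracy.
client.search(
    collection_name="bq_collection",
    query_vector=[0.0] * 1536,  # replace with a real query embedding
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            rescore=True,
            oversampling=3.0,
        )
    ),
)
```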
documentation/embeddings/openai.md
---
title: AWS Bedrock
weight: 1000
---

# Bedrock Embeddings

You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).

You'll need the following information from your AWS account:

- Region
- Access key ID
- Secret key

To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key).

With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html), which produces sentence embeddings of size 1536.

```python
# Install the required dependencies
# pip install boto3 qdrant_client

import json
import boto3

from qdrant_client import QdrantClient, models

session = boto3.Session()

bedrock_client = session.client(
    "bedrock-runtime",
    region_name="<YOUR_AWS_REGION>",
    aws_access_key_id="<YOUR_AWS_ACCESS_KEY_ID>",
    aws_secret_access_key="<YOUR_AWS_SECRET_KEY>",
)

qdrant_client = QdrantClient(url="http://localhost:6333")

qdrant_client.create_collection(
    "{collection_name}",
    vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
)

body = json.dumps({"inputText": "Some text to generate embeddings for"})

response = bedrock_client.invoke_model(
    body=body,
    modelId="amazon.titan-embed-text-v1",
    accept="application/json",
    contentType="application/json",
)

response_body = json.loads(response.get("body").read())

qdrant_client.upsert(
    "{collection_name}",
    points=[models.PointStruct(id=1, vector=response_body["embedding"])],
)
```

```javascript
// Install the required dependencies
// npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest

import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";
import { QdrantClient } from '@qdrant/js-client-rest';

const main = async () => {
  const bedrockClient = new BedrockRuntimeClient({
    region: "<YOUR_AWS_REGION>",
    credentials: {
      accessKeyId: "<YOUR_AWS_ACCESS_KEY_ID>",
      secretAccessKey: "<YOUR_AWS_SECRET_KEY>",
    },
  });

  const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });

  await qdrantClient.createCollection("{collection_name}", {
    vectors: {
      size: 1536,
      distance: 'Cosine',
    }
  });

  const response = await bedrockClient.send(
    new InvokeModelCommand({
      modelId: "amazon.titan-embed-text-v1",
      body: JSON.stringify({
        inputText: "Some text to generate embeddings for",
      }),
      contentType: "application/json",
      accept: "application/json",
    })
  );

  const body = new TextDecoder().decode(response.body);

  await qdrantClient.upsert("{collection_name}", {
    points: [
      {
        id: 1,
        vector: JSON.parse(body).embedding,
      },
    ],
  });
}

main();
```
documentation/embeddings/bedrock.md
---
title: Aleph Alpha
weight: 900
aliases:
  - /documentation/examples/aleph-alpha-search/
  - /documentation/tutorials/aleph-alpha-search/
  - /documentation/integrations/aleph-alpha/
---

# Using Aleph Alpha Embeddings with Qdrant

Aleph Alpha is a multimodal and multilingual embeddings provider. Their API allows creating embeddings for text and images in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that can be installed with pip:

```bash
pip install aleph-alpha-client
```

Both synchronous and asynchronous clients are available. Obtaining the embedding for an image and storing it in Qdrant might be done in the following way:

```python
import qdrant_client
from qdrant_client.models import Batch

from aleph_alpha_client import (
    Prompt,
    AsyncClient,
    SemanticEmbeddingRequest,
    SemanticRepresentation,
    ImagePrompt
)

aa_token = "<< your_token >>"
model = "luminous-base"

qdrant_client = qdrant_client.QdrantClient()
async with AsyncClient(token=aa_token) as client:
    prompt = ImagePrompt.from_file("./path/to/the/image.jpg")
    prompt = Prompt.from_image(prompt)

    query_params = {
        "prompt": prompt,
        "representation": SemanticRepresentation.Symmetric,
        "compress_to_size": 128,
    }
    query_request = SemanticEmbeddingRequest(**query_params)
    query_response = await client.semantic_embed(
        request=query_request, model=model
    )

    qdrant_client.upsert(
        collection_name="MyCollection",
        points=Batch(
            ids=[1],
            vectors=[query_response.embedding],
        )
    )
```

If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply provide the input text via the `Prompt.from_text` method, as in the sketch below.
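A short sketch, reusing the client and model from the example above:

```python
# Embed a text with the same model (runs inside the same `async with` block)
prompt = Prompt.from_text("This is an example text.")

query_params = {
    "prompt": prompt,
    "representation": SemanticRepresentation.Symmetric,
    "compress_to_size": 128,
}
query_request = SemanticEmbeddingRequest(**query_params)
query_response = await client.semantic_embed(request=query_request, model=model)
```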
documentation/embeddings/aleph-alpha.md
---
title: Ollama
weight: 2600
---

# Using Ollama with Qdrant

Ollama lets you run open-source models, including embedding models, locally on your own machine. It supports a variety of embedding models, making it possible to build retrieval augmented generation (RAG) applications that combine text prompts with existing documents or other data, without sending anything to an external API.

## Installation

You can install the required package using the following pip command:

```bash
pip install ollama
```

## Integration Example

The sketch below assumes a locally running Ollama server and the `nomic-embed-text` embedding model (pull it first with `ollama pull nomic-embed-text`):

```python
import qdrant_client
from qdrant_client.models import Batch
import ollama

# Generate an embedding with a locally served model
text = "Ollama lets you run embedding models locally."
response = ollama.embeddings(model="nomic-embed-text", prompt=text)
embedding = response["embedding"]

# Initialize Qdrant client
client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="OllamaEmbeddings",
    points=Batch(
        ids=[1],
        vectors=[embedding],
    )
)
```
documentation/embeddings/ollama.md
---
title: OpenCLIP
weight: 2750
---

# Using OpenCLIP with Qdrant

OpenCLIP is an open-source implementation of the CLIP model, allowing for open-source generation of multimodal embeddings that link text and images. It can be installed with `pip install open_clip_torch`.

```python
import torch
import qdrant_client
from qdrant_client.models import Batch
import open_clip

# Load the OpenCLIP model and tokenizer
# (create_model_and_transforms returns train and eval transforms; we keep the eval one)
model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='openai')
tokenizer = open_clip.get_tokenizer('ViT-B-32')

# Generate embeddings for a text
text = "A photo of a cat"
text_inputs = tokenizer([text])

with torch.no_grad():
    text_features = model.encode_text(text_inputs)

# Convert tensor to a list
embeddings = text_features[0].cpu().numpy().tolist()

# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
qdrant_client.upsert(
    collection_name="OpenCLIPEmbeddings",
    points=Batch(
        ids=[1],
        vectors=[embeddings],
    )
)
```
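Images are embedded into the same space, so they can be indexed or used as queries against the same collection. A brief sketch reusing the model and `preprocess` transform from above (the file path is illustrative):

```python
from PIL import Image

# Embed an image with the same model
image = preprocess(Image.open("path/to/image.jpg")).unsqueeze(0)
with torch.no_grad():
    image_features = model.encode_image(image)

image_embedding = image_features[0].cpu().numpy().tolist()

# Use the image embedding as a query vector against the stored embeddings
qdrant_client.search(
    collection_name="OpenCLIPEmbeddings",
    query_vector=image_embedding,
)
```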
documentation/embeddings/openclip.md
---
title: Databricks Embeddings
weight: 1500
---

# Using Databricks Embeddings with Qdrant

Databricks is a unified data and AI platform. If you compute or store embeddings in Databricks, for example in a Delta table, you can fetch them with the `databricks-sql-connector` package (`pip install databricks-sql-connector`) and load them into Qdrant for vector search, as in the following Python example.

```python
import qdrant_client
from qdrant_client.models import Batch
from databricks import sql

# Connect to a Databricks SQL endpoint
connection = sql.connect(server_hostname='your_hostname',
                         http_path='your_http_path',
                         access_token='your_access_token')

# Execute a query to fetch a precomputed embedding
query = "SELECT embedding FROM your_table WHERE id = 1"
cursor = connection.cursor()
cursor.execute(query)
embedding = cursor.fetchone()[0]

# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
qdrant_client.upsert(
    collection_name="DatabricksEmbeddings",
    points=Batch(
        ids=[1],  # Unique ID for the data point
        vectors=[embedding],  # Embedding fetched from Databricks
    )
)
```
documentation/embeddings/databricks.md
---
title: Cohere
weight: 1400
aliases: [ ../integrations/cohere/ ]
---

# Cohere

Qdrant is compatible with Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK, which can be installed like any other package:

```bash
pip install cohere
```

The embeddings returned by the co.embed API can be used directly in the Qdrant client's calls:

```python
import cohere
import qdrant_client
from qdrant_client.models import Batch

cohere_client = cohere.Client("<< your_api_key >>")
qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=cohere_client.embed(
            model="large",
            texts=["The best vector database"],
        ).embeddings,
    ),
)
```

If you are interested in seeing an end-to-end project created with co.embed API and Qdrant, please check out the
"[Question Answering as a Service with Cohere and Qdrant](/articles/qa-with-cohere-and-qdrant/)" article.

## Embed v3

Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for.

- `input_type="search_document"` - for documents to store in Qdrant
- `input_type="search_query"` - for search queries to find the most relevant documents
- `input_type="classification"` - for classification tasks
- `input_type="clustering"` - for text clustering

While implementing semantic search applications, such as RAG, you should use `input_type="search_document"` for the indexed documents and `input_type="search_query"` for the search queries. The following example shows how to index documents with the Embed v3 model:

```python
import cohere
import qdrant_client
from qdrant_client.models import Batch

cohere_client = cohere.Client("<< your_api_key >>")
client = qdrant_client.QdrantClient()
client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=cohere_client.embed(
            model="embed-english-v3.0",  # New Embed v3 model
            input_type="search_document",  # Input type for documents
            texts=["Qdrant is a vector database written in Rust"],
        ).embeddings,
    ),
)
```

Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model:

```python
client.search(
    collection_name="MyCollection",
    query_vector=cohere_client.embed(
        model="embed-english-v3.0",  # New Embed v3 model
        input_type="search_query",  # Input type for search queries
        texts=["The best vector database"],
    ).embeddings[0],
)
```

<aside role="status">
According to Cohere's documentation, all v3 models can use dot product, cosine similarity, and Euclidean distance as the similarity metric, as all metrics return identical rankings.
</aside>
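The examples above assume `MyCollection` already exists. A minimal sketch of creating it for `embed-english-v3.0`, which produces 1024-dimensional vectors:

```python
from qdrant_client.models import Distance, VectorParams

client.create_collection(
    collection_name="MyCollection",
    vectors_config=VectorParams(
        size=1024,  # dimensionality of embed-english-v3.0
        distance=Distance.COSINE,  # any metric works, per the note above
    ),
)
```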
documentation/embeddings/cohere.md
---
title: Clip
weight: 1300
---

# Using Clip with Qdrant

CLIP (Contrastive Language-Image Pre-Training) provides advanced AI capabilities including natural language processing and computer vision. CLIP is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.

## Installation

The example below uses the Hugging Face `transformers` implementation of CLIP. You can install the required packages using the following pip command:

```bash
pip install transformers torch pillow
```

## Integration Example

```python
import torch
import qdrant_client
from qdrant_client.models import Batch
from transformers import CLIPProcessor, CLIPModel
from PIL import Image

# Load the CLIP model and processor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Load and process the image
image = Image.open("path/to/image.jpg")
inputs = processor(images=image, return_tensors="pt")

# Generate embeddings
with torch.no_grad():
    embeddings = model.get_image_features(**inputs).numpy().tolist()

# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
qdrant_client.upsert(
    collection_name="ImageEmbeddings",
    points=Batch(
        ids=[1],
        vectors=embeddings,
    )
)
```
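Because CLIP maps images and text into a shared space, you can also query the image collection with a text prompt. A short sketch reusing the model and processor from above:

```python
# Embed a text query in the same space and search the image collection
text_inputs = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_embedding = model.get_text_features(**text_inputs)[0].numpy().tolist()

qdrant_client.search(
    collection_name="ImageEmbeddings",
    query_vector=text_embedding,
)
```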
documentation/embeddings/clip.md
---
title: Clarifai
weight: 1200
---

# Using Clarifai Embeddings with Qdrant

Clarifai is a leading provider of visual embeddings, which are particularly strong in image and video analysis. Clarifai offers an API that allows you to create embeddings for various media types, which can be integrated into Qdrant for efficient vector search and retrieval.

You can install the Clarifai Python package with pip:

```bash
pip install clarifai
```

## Integration Example

The sketch below uses the legacy `clarifai.rest` client (Clarifai API v2); if you are on the current gRPC-based SDK, consult Clarifai's documentation for the equivalent calls.

```python
import qdrant_client
from qdrant_client.models import Batch
from clarifai.rest import ClarifaiApp

# Initialize Clarifai client
clarifai_app = ClarifaiApp(api_key="<< your_api_key >>")

# Choose the general embedding model
model = clarifai_app.public_models.general_embedding_model

# Upload and get embeddings for an image
image_path = "./path/to/the/image.jpg"
response = model.predict_by_filename(image_path)

# Extract the embedding from the response
embedding = response['outputs'][0]['data']['embeddings'][0]['vector']

# Initialize Qdrant client
qdrant_client = qdrant_client.QdrantClient()

# Upsert the embedding into Qdrant
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=[embedding],
    )
)
```
documentation/embeddings/clarifai.md
---
title: Mistral
weight: 2100
---

| Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/mistral-getting-started/mistral-embed-getting-started/mistral_qdrant_getting_started.ipynb) |
| --- | ----------- | ----------- |

# Mistral

Qdrant is compatible with the newly released Mistral Embed and its official Python SDK, which can be installed as any other package.

## Setup

### Install the client

```bash
pip install mistralai
```

And then we set this up:

```python
from mistralai.client import MistralClient
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, VectorParams, Distance

collection_name = "example_collection"

MISTRAL_API_KEY = "your_mistral_api_key"

client = QdrantClient(":memory:")
mistral_client = MistralClient(api_key=MISTRAL_API_KEY)

texts = [
    "Qdrant is the best vector search engine!",
    "Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```

Let's see how to use the Embeddings API to embed a document for retrieval. The following example shows how to embed a document with the `mistral-embed` model:

## Embedding a document

```python
result = mistral_client.embeddings(
    model="mistral-embed",
    input=texts,
)
```

The returned result has a `data` field with one entry per input text. Each entry's `embedding` key holds a list of floats representing the embedding of the document.

### Converting this into Qdrant Points

```python
points = [
    PointStruct(
        id=idx,
        vector=response.embedding,
        payload={"text": text},
    )
    for idx, (response, text) in enumerate(zip(result.data, texts))
]
```

## Create a collection and Insert the documents

```python
client.create_collection(
    collection_name,
    vectors_config=VectorParams(
        size=1024,
        distance=Distance.COSINE,
    )
)
client.upsert(collection_name, points)
```

## Searching for documents with Qdrant

Once the documents are indexed, you can search for the most relevant documents using the same model:

```python
client.search(
    collection_name=collection_name,
    query_vector=mistral_client.embeddings(
        model="mistral-embed", input=["What is the best to use for vector search scaling?"]
    ).data[0].embedding,
)
```

## Using Mistral Embedding Models with Binary Quantization

You can use Mistral Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times with minimal loss of search quality.

At an oversampling of 3 and a limit of 100, we achieve a 95% recall against the exact nearest neighbors with rescore enabled.

| Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
|--------------|---------|----------|----------|----------|----------|----------|--------------|
| | **Rescore** | False | True | False | True | False | True |
| **Limit** | | | | | | | |
| 10 | | 0.53444 | 0.857778 | 0.534444 | 0.918889 | 0.533333 | 0.941111 |
| 20 | | 0.508333 | 0.837778 | 0.508333 | 0.903889 | 0.508333 | 0.927778 |
| 50 | | 0.492222 | 0.834444 | 0.492222 | 0.903556 | 0.492889 | 0.940889 |
| 100 | | 0.499111 | 0.845444 | 0.498556 | 0.918333 | 0.497667 | **0.944556** |

That's it! You can now use Mistral Embedding Models with Qdrant!
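For reference, the oversampling and rescore settings from the table map directly to search-time parameters. A sketch of a query at oversampling 3 with rescoring enabled, assuming the collection was created with Binary Quantization configured:

```python
from qdrant_client.models import QuantizationSearchParams, SearchParams

client.search(
    collection_name=collection_name,
    query_vector=mistral_client.embeddings(
        model="mistral-embed",
        input=["What is the best to use for vector search scaling?"],
    ).data[0].embedding,
    search_params=SearchParams(
        quantization=QuantizationSearchParams(
            rescore=True,      # re-score candidates with the original vectors
            oversampling=3.0,  # fetch 3x candidates before rescoring
        )
    ),
    limit=100,
)
```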
documentation/embeddings/mistral.md
---
title: "Nomic"
weight: 2300
---

# Nomic

The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder. While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1), you may find it easier to obtain the embeddings through the [Nomic Text Embeddings API](https://docs.nomic.ai/reference/endpoints/nomic-embed-text). You can use it with the official Python client (`pip install nomic`), with FastEmbed, or through direct HTTP requests (a sketch of the latter is shown at the end of this page).

<aside role="status">Using Nomic Embeddings via the Nomic API/SDK requires configuring the <a href="https://atlas.nomic.ai/cli-login">Nomic API token</a>.</aside>

You can use Nomic embeddings directly in Qdrant client calls. There is a difference in the way the embeddings are obtained for documents and queries.

#### Upsert using [Nomic SDK](https://github.com/nomic-ai/nomic)

The `task_type` parameter defines the embeddings that you get. For documents, set the `task_type` to `search_document`:

```python
from qdrant_client import QdrantClient, models
from nomic import embed

output = embed.text(
    texts=["Qdrant is the best vector database!"],
    model="nomic-embed-text-v1",
    task_type="search_document",
)

client = QdrantClient()
client.upsert(
    collection_name="my-collection",
    points=models.Batch(
        ids=[1],
        vectors=output["embeddings"],
    ),
)
```

#### Upsert using [FastEmbed](https://github.com/qdrant/fastembed)

```python
from fastembed import TextEmbedding
from qdrant_client import QdrantClient, models

model = TextEmbedding("nomic-ai/nomic-embed-text-v1")
output = model.embed(["Qdrant is the best vector database!"])

client = QdrantClient()
client.upsert(
    collection_name="my-collection",
    points=models.Batch(
        ids=[1],
        vectors=[embeddings.tolist() for embeddings in output],
    ),
)
```

#### Search using [Nomic SDK](https://github.com/nomic-ai/nomic)

To query the collection, set the `task_type` to `search_query`:

```python
output = embed.text(
    texts=["What is the best vector database?"],
    model="nomic-embed-text-v1",
    task_type="search_query",
)

client.search(
    collection_name="my-collection",
    query_vector=output["embeddings"][0],
)
```

#### Search using [FastEmbed](https://github.com/qdrant/fastembed)

```python
output = next(model.embed("What is the best vector database?"))

client.search(
    collection_name="my-collection",
    query_vector=output.tolist(),
)
```

For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
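#### Direct HTTP requests

If you prefer plain HTTP over the SDKs, the request might look like the sketch below. The endpoint URL and payload shape here are assumptions based on the Nomic API reference; verify them against the current documentation before use:

```python
import requests

NOMIC_API_KEY = "<YOUR_NOMIC_API_KEY>"

response = requests.post(
    "https://api-atlas.nomic.ai/v1/embedding/text",
    headers={"Authorization": f"Bearer {NOMIC_API_KEY}"},
    json={
        "model": "nomic-embed-text-v1",
        "texts": ["Qdrant is the best vector database!"],
        "task_type": "search_document",
    },
)
embeddings = response.json()["embeddings"]
```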
documentation/embeddings/nomic.md
---
title: Nvidia
weight: 2400
---

# Nvidia

Qdrant supports working with [Nvidia embeddings](https://build.nvidia.com/explore/retrieval).

You can generate an API key to authenticate the requests from the [Nvidia Playground](<https://build.nvidia.com/nvidia/embed-qa-4>).

### Setting up the Qdrant client and Nvidia session

```python
import requests
from qdrant_client import QdrantClient

NVIDIA_BASE_URL = "https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"

NVIDIA_API_KEY = "<YOUR_API_KEY>"

nvidia_session = requests.Session()

client = QdrantClient(":memory:")

headers = {
    "Authorization": f"Bearer {NVIDIA_API_KEY}",
    "Accept": "application/json",
}

texts = [
    "Qdrant is the best vector search engine!",
    "Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```

```typescript
import { QdrantClient } from '@qdrant/js-client-rest';

const NVIDIA_BASE_URL = "https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"
const NVIDIA_API_KEY = "<YOUR_API_KEY>"

const client = new QdrantClient({ url: 'http://localhost:6333' });

const headers = {
    "Authorization": "Bearer " + NVIDIA_API_KEY,
    "Accept": "application/json",
    "Content-Type": "application/json"
}

const texts = [
    "Qdrant is the best vector search engine!",
    "Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```

The following example shows how to embed documents with the Embed QA 4 model (called `NV-Embed-QA` in the API), which generates sentence embeddings of size 1024.

### Embedding documents

```python
payload = {
    "input": texts,
    "input_type": "passage",
    "model": "NV-Embed-QA",
}

response_body = nvidia_session.post(
    NVIDIA_BASE_URL, headers=headers, json=payload
).json()
```

```typescript
let body = {
    "input": texts,
    "input_type": "passage",
    "model": "NV-Embed-QA"
}

let response = await fetch(NVIDIA_BASE_URL, {
    method: "POST",
    body: JSON.stringify(body),
    headers
});

let response_body = await response.json()
```

### Converting the model outputs to Qdrant points

```python
from qdrant_client.models import PointStruct

points = [
    PointStruct(
        id=idx,
        vector=data["embedding"],
        payload={"text": text},
    )
    for idx, (data, text) in enumerate(zip(response_body["data"], texts))
]
```

```typescript
let points = response_body.data.map((data, i) => {
    return {
        id: i,
        vector: data.embedding,
        payload: {
            text: texts[i]
        }
    }
})
```

### Creating a collection to insert the documents

```python
from qdrant_client.models import VectorParams, Distance

collection_name = "example_collection"

client.create_collection(
    collection_name,
    vectors_config=VectorParams(
        size=1024,
        distance=Distance.COSINE,
    ),
)
client.upsert(collection_name, points)
```

```typescript
const COLLECTION_NAME = "example_collection"

await client.createCollection(COLLECTION_NAME, {
    vectors: {
        size: 1024,
        distance: 'Cosine',
    }
});

await client.upsert(COLLECTION_NAME, {
    wait: true,
    points
})
```

## Searching for documents with Qdrant

Once the documents are added, you can search for the most relevant documents.
```python payload = { "input": "What is the best to use for vector search scaling?", "input_type": "query", "model": "NV-Embed-QA", } response_body = nvidia_session.post( NVIDIA_BASE_URL, headers=headers, json=payload ).json() client.search( collection_name=collection_name, query_vector=response_body["data"][0]["embedding"], ) ``` ```typescript body = { "input": "What is the best to use for vector search scaling?", "input_type": "query", "model": "NV-Embed-QA", } response = await fetch(NVIDIA_BASE_URL, { method: "POST", body: JSON.stringify(body), headers }); response_body = await response.json() await client.search(COLLECTION_NAME, { vector: response_body.data[0].embedding, }); ```
documentation/embeddings/nvidia.md
---
title: Prem AI
weight: 2800
---

# Prem AI

[PremAI](https://premai.io/) is a unified generative AI development platform for fine-tuning, deploying, and monitoring AI models.

Qdrant is compatible with PremAI APIs.

### Installing the SDKs

```bash
pip install premai qdrant-client
```

To install the npm package:

```bash
npm install @premai/prem-sdk @qdrant/js-client-rest
```

### Import all required packages

```python
from premai import Prem

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams
```

```typescript
import Prem from '@premai/prem-sdk';
import { QdrantClient } from '@qdrant/js-client-rest';
```

### Define all the constants

We need to define the project ID and the embedding model to use. You can learn more about obtaining these in the PremAI [docs](https://docs.premai.io/quick-start).

```python
PROJECT_ID = 123
EMBEDDING_MODEL = "text-embedding-3-large"
COLLECTION_NAME = "prem-collection-py"
QDRANT_SERVER_URL = "http://localhost:6333"
DOCUMENTS = [
    "This is a sample python document",
    "We will be using qdrant and premai python sdk"
]
```

```typescript
const PROJECT_ID = 123;
const EMBEDDING_MODEL = "text-embedding-3-large";
const COLLECTION_NAME = "prem-collection-js";
const SERVER_URL = "http://localhost:6333"
const DOCUMENTS = [
    "This is a sample javascript document",
    "We will be using qdrant and premai javascript sdk"
];
```

### Set up PremAI and Qdrant clients

```python
prem_client = Prem(api_key="xxxx-xxx-xxx")
qdrant_client = QdrantClient(url=QDRANT_SERVER_URL)
```

```typescript
const premaiClient = new Prem({
    apiKey: "xxxx-xxx-xxx"
})
const qdrantClient = new QdrantClient({ url: SERVER_URL });
```

### Generating Embeddings

```python
from typing import Union, List

def get_embeddings(
    project_id: int,
    embedding_model: str,
    documents: Union[str, List[str]]
) -> List[List[float]]:
    """
    Helper function to get the embeddings from premai sdk

    Args:
        project_id (int): The project id from the Prem SaaS platform.
        embedding_model (str): The embedding model alias to choose
        documents (Union[str, List[str]]): Single text or list of texts to embed

    Returns:
        List[List[float]]: A list of embeddings, each represented as a list of floats
    """
    embeddings = []
    documents = [documents] if isinstance(documents, str) else documents
    for embedding in prem_client.embeddings.create(
        project_id=project_id, model=embedding_model, input=documents
    ).data:
        embeddings.append(embedding.embedding)

    return embeddings
```

```typescript
async function getEmbeddings(projectID, embeddingModel, documents) {
    const response = await premaiClient.embeddings.create({
        project_id: projectID,
        model: embeddingModel,
        input: documents
    });
    return response;
}
```

### Converting Embeddings to Qdrant Points

```python
from qdrant_client.models import PointStruct

embeddings = get_embeddings(
    project_id=PROJECT_ID,
    embedding_model=EMBEDDING_MODEL,
    documents=DOCUMENTS
)

points = [
    PointStruct(
        id=idx,
        vector=embedding,
        payload={"text": text},
    )
    for idx, (embedding, text) in enumerate(zip(embeddings, DOCUMENTS))
]
```

```typescript
function convertToQdrantPoints(embeddings, texts) {
    return embeddings.data.map((data, i) => {
        return {
            id: i,
            vector: data.embedding,
            payload: {
                text: texts[i]
            }
        };
    });
}

const embeddings = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, DOCUMENTS);
const points = convertToQdrantPoints(embeddings, DOCUMENTS);
```

### Set up a Qdrant Collection

```python
qdrant_client.create_collection(
    collection_name=COLLECTION_NAME,
    vectors_config=VectorParams(size=3072, distance=Distance.DOT)
)
```

```typescript
await qdrantClient.createCollection(COLLECTION_NAME, {
    vectors: {
        size: 3072,
        distance: 'Dot'
    }
})
```

### Insert Documents into the Collection

```python
qdrant_client.upsert(
    collection_name=COLLECTION_NAME,
    points=points
)
```

```typescript
await qdrantClient.upsert(COLLECTION_NAME, {
    wait: true,
    points
});
```

### Perform a Search

```python
query = "what is the extension of python document"

query_embedding = get_embeddings(
    project_id=PROJECT_ID,
    embedding_model=EMBEDDING_MODEL,
    documents=query
)

qdrant_client.search(collection_name=COLLECTION_NAME, query_vector=query_embedding[0])
```

```typescript
const query = "what is the extension of javascript document"
const query_embedding_response = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, query)

await qdrantClient.search(COLLECTION_NAME, {
    vector: query_embedding_response.data[0].embedding
});
```
documentation/embeddings/premai.md
--- title: GradientAI weight: 1750 --- # Using GradientAI with Qdrant GradientAI provides state-of-the-art models for generating embeddings, which are highly effective for vector search tasks in Qdrant. ## Installation You can install the required packages using the following pip command: ```bash pip install gradientai python-dotenv qdrant-client ``` ## Code Example ```python from dotenv import load_dotenv import qdrant_client from qdrant_client.models import Batch from gradientai import Gradient load_dotenv() def main() -> None: # Initialize GradientAI client gradient = Gradient() # Retrieve the embeddings model embeddings_model = gradient.get_embeddings_model(slug="bge-large") # Generate embeddings for your data generate_embeddings_response = embeddings_model.generate_embeddings( inputs=[ "Multimodal brain MRI is the preferred method to evaluate for acute ischemic infarct and ideally should be obtained within 24 hours of symptom onset, and in most centers will follow a NCCT", "CTA has a higher sensitivity and positive predictive value than magnetic resonance angiography (MRA) for detection of intracranial stenosis and occlusion and is recommended over time-of-flight (without contrast) MRA", "Echocardiographic strain imaging has the advantage of detecting early cardiac involvement, even before thickened walls or symptoms are apparent", ], ) # Initialize Qdrant client client = qdrant_client.QdrantClient(url="http://localhost:6333") # Upsert the embeddings into Qdrant for i, embedding in enumerate(generate_embeddings_response.embeddings): client.upsert( collection_name="MedicalRecords", points=Batch( ids=[i + 1], # Unique ID for each embedding vectors=[embedding.embedding], ) ) print("Embeddings successfully upserted into Qdrant.") gradient.close() if __name__ == "__main__": main() ```
documentation/embeddings/gradientai.md
---
title: Gemini
weight: 1600
---

| Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/gemini-getting-started/gemini-getting-started/gemini-getting-started.ipynb) |
| --- | ----------- | ----------- |

# Gemini

Qdrant is compatible with the Gemini Embedding Model API and its official Python SDK, which can be installed as any other package.

Gemini is a family of Google models released in December 2023. The new embedding models succeed the previous Gecko embedding model.

In the latest models, an additional parameter, `task_type`, can be passed to the API call. It designates the intended purpose of the embeddings.

The Embedding Model API supports various task types, outlined as follows:

1. `retrieval_query`: query in a search/retrieval setting
2. `retrieval_document`: document from the corpus being searched
3. `semantic_similarity`: semantic text similarity
4. `classification`: embeddings to be used for text classification
5. `clustering`: the generated embeddings will be used for clustering
6. `task_type_unspecified`: Unset value, which will default to one of the other values.

If you're building a semantic search application, such as RAG, you should use `task_type="retrieval_document"` for the indexed documents and `task_type="retrieval_query"` for the search queries.

The following example shows how to do this with Qdrant:

## Setup

```bash
pip install google-generativeai
```

Let's see how to use the Embedding Model API to embed a document for retrieval.

The following example shows how to embed a document with the `models/embedding-001` with the `retrieval_document` task type:

## Embedding a document

```python
import google.generativeai as gemini_client
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

collection_name = "example_collection"

GEMINI_API_KEY = "YOUR GEMINI API KEY"  # add your key here

client = QdrantClient(url="http://localhost:6333")
gemini_client.configure(api_key=GEMINI_API_KEY)

texts = [
    "Qdrant is a vector database that is compatible with Gemini.",
    "Gemini is a new family of Google PaLM models, released in December 2023.",
]

results = [
    gemini_client.embed_content(
        model="models/embedding-001",
        content=sentence,
        task_type="retrieval_document",
        title="Qdrant x Gemini",
    )
    for sentence in texts
]
```

## Creating Qdrant Points and Indexing documents with Qdrant

### Creating Qdrant Points

```python
points = [
    PointStruct(
        id=idx,
        vector=response['embedding'],
        payload={"text": text},
    )
    for idx, (response, text) in enumerate(zip(results, texts))
]
```

### Create Collection

```python
client.create_collection(collection_name, vectors_config=
    VectorParams(
        size=768,
        distance=Distance.COSINE,
    )
)
```

### Add these into the collection

```python
client.upsert(collection_name, points)
```

## Searching for documents with Qdrant

Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type:

```python
client.search(
    collection_name=collection_name,
    query_vector=gemini_client.embed_content(
        model="models/embedding-001",
        content="Is Qdrant compatible with Gemini?",
        task_type="retrieval_query",
    )["embedding"],
)
```

## Using Gemini Embedding Models with Binary Quantization

You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the
embeddings by 32 times with minimal loss of search quality.

In this table, you can see the results of the search with the `models/embedding-001` model with Binary Quantization in comparison with the original model:

At an oversampling of 3 and a limit of 100, we achieve a 95% recall against the exact nearest neighbors with rescore enabled.

| Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
|--------------|---------|----------|----------|----------|----------|----------|----------|
| | **Rescore** | False | True | False | True | False | True |
| **Limit** | | | | | | | |
| 10 | | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 |
| 20 | | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 |
| 50 | | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 |
| 100 | | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** |

That's it! You can now use Gemini Embedding Models with Qdrant!
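For reference, a minimal sketch of creating the collection with Binary Quantization enabled (768 matches the `models/embedding-001` output size; the collection name is illustrative):

```python
from qdrant_client.models import (
    BinaryQuantization,
    BinaryQuantizationConfig,
    Distance,
    VectorParams,
)

client.create_collection(
    collection_name="example_collection_bq",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    quantization_config=BinaryQuantization(
        binary=BinaryQuantizationConfig(always_ram=True),
    ),
)
```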
documentation/embeddings/gemini.md
---
title: OCI (Oracle Cloud Infrastructure)
weight: 2500
---

# Using OCI (Oracle Cloud Infrastructure) with Qdrant

OCI's Generative AI service provides cloud-hosted embedding models. The Generative AI Embedding Models convert textual input - ranging from phrases and sentences to entire paragraphs - into a structured format known as embeddings. Each piece of text input is transformed into a numerical array consisting of 1024 distinct numbers.

## Installation

You can install the required package using the following pip command:

```bash
pip install oci
```

## Code Example

Below is a sketch of how you might obtain embeddings using OCI's Generative AI inference API and store them in a Qdrant collection. The model ID and compartment OCID are placeholders to be replaced with your own values; verify the exact class and method names against the OCI Python SDK documentation.

```python
import oci
import qdrant_client
from qdrant_client.models import Batch

# Initialize the OCI Generative AI inference client
# (a region-specific service endpoint may need to be passed explicitly)
config = oci.config.from_file()
ai_client = oci.generative_ai_inference.GenerativeAiInferenceClient(config)

# Generate embeddings using a model hosted on OCI Generative AI
embed_details = oci.generative_ai_inference.models.EmbedTextDetails(
    inputs=["OCI provides cloud-based AI services."],
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="cohere.embed-english-v3.0"  # illustrative model ID
    ),
    compartment_id="<YOUR_COMPARTMENT_OCID>",
)
response = ai_client.embed_text(embed_details)
embedding = response.data.embeddings[0]

# Initialize Qdrant client
client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="CloudAI",
    points=Batch(
        ids=[1],
        vectors=[embedding],
    )
)
```
documentation/embeddings/oci.md
---
title: Jina Embeddings
weight: 1900
aliases:
  - /documentation/embeddings/jina-emebddngs/
  - ../integrations/jina-embeddings/
---

# Jina Embeddings

Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/), which allow for model input lengths of up to 8192 tokens.

To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a discount code (**QDRANT**) that grants a **10% discount** if you plan to use Jina Embeddings in production.

```python
import qdrant_client
import requests

from qdrant_client.models import Distance, VectorParams, Batch

# Provide Jina API key and choose one of the available models.
# You can get a free trial key here: https://jina.ai/embeddings/
JINA_API_KEY = "jina_xxxxxxxxxxx"
MODEL = "jina-embeddings-v2-base-en"  # or "jina-embeddings-v2-small-en"
EMBEDDING_SIZE = 768  # 512 for small variant

# Get embeddings from the API
url = "https://api.jina.ai/v1/embeddings"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {JINA_API_KEY}",
}

data = {
    "input": ["Your text string goes here", "You can send multiple texts"],
    "model": MODEL,
}

response = requests.post(url, headers=headers, json=data)
embeddings = [d["embedding"] for d in response.json()["data"]]

# Index the embeddings into Qdrant
client = qdrant_client.QdrantClient(":memory:")
client.create_collection(
    collection_name="MyCollection",
    vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT),
)

client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=list(range(len(embeddings))),
        vectors=embeddings,
    ),
)
```
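Searching works the same way: embed the query text with the same endpoint and pass the resulting vector to Qdrant. A short sketch reusing the variables from above:

```python
# Embed the query with the same Jina endpoint
query_data = {
    "input": ["What is the best vector database?"],
    "model": MODEL,
}
query_response = requests.post(url, headers=headers, json=query_data)
query_embedding = query_response.json()["data"][0]["embedding"]

# Search the collection with the query embedding
client.search(
    collection_name="MyCollection",
    query_vector=query_embedding,
)
```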
documentation/embeddings/jina-embeddings.md
---
title: Upstage
weight: 3100
---

# Upstage

Qdrant supports working with the Solar Embeddings API from [Upstage](https://upstage.ai/).

[Solar Embeddings](https://developers.upstage.ai/docs/apis/embeddings) API features dual models for user queries and document embedding, within a unified vector space, designed for performant text processing.

You can generate an API key to authenticate the requests from the [Upstage Console](<https://console.upstage.ai/api-keys>).

### Setting up the Qdrant client and Upstage session

```python
import requests
from qdrant_client import QdrantClient

UPSTAGE_BASE_URL = "https://api.upstage.ai/v1/solar/embeddings"
UPSTAGE_API_KEY = "<YOUR_API_KEY>"

upstage_session = requests.Session()

client = QdrantClient(url="http://localhost:6333")

headers = {
    "Authorization": f"Bearer {UPSTAGE_API_KEY}",
    "Accept": "application/json",
}

texts = [
    "Qdrant is the best vector search engine!",
    "Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```

```typescript
import { QdrantClient } from '@qdrant/js-client-rest';

const UPSTAGE_BASE_URL = "https://api.upstage.ai/v1/solar/embeddings"
const UPSTAGE_API_KEY = "<YOUR_API_KEY>"

const client = new QdrantClient({ url: 'http://localhost:6333' });

const headers = {
    "Authorization": "Bearer " + UPSTAGE_API_KEY,
    "Accept": "application/json",
    "Content-Type": "application/json"
}

const texts = [
    "Qdrant is the best vector search engine!",
    "Loved by Enterprises and everyone building for low latency, high performance, and scale.",
]
```

The following example shows how to embed documents with the recommended `solar-embedding-1-large-passage` and `solar-embedding-1-large-query` models, which generate sentence embeddings of size 4096.

### Embedding documents

```python
body = {
    "input": texts,
    "model": "solar-embedding-1-large-passage",
}

response_body = upstage_session.post(
    UPSTAGE_BASE_URL, headers=headers, json=body
).json()
```

```typescript
let body = {
    "input": texts,
    "model": "solar-embedding-1-large-passage",
}

let response = await fetch(UPSTAGE_BASE_URL, {
    method: "POST",
    body: JSON.stringify(body),
    headers
});

let response_body = await response.json()
```

### Converting the model outputs to Qdrant points

```python
from qdrant_client.models import PointStruct

points = [
    PointStruct(
        id=idx,
        vector=data["embedding"],
        payload={"text": text},
    )
    for idx, (data, text) in enumerate(zip(response_body["data"], texts))
]
```

```typescript
let points = response_body.data.map((data, i) => {
    return {
        id: i,
        vector: data.embedding,
        payload: {
            text: texts[i]
        }
    }
})
```

### Creating a collection to insert the documents

```python
from qdrant_client.models import VectorParams, Distance

collection_name = "example_collection"

client.create_collection(
    collection_name,
    vectors_config=VectorParams(
        size=4096,
        distance=Distance.COSINE,
    ),
)
client.upsert(collection_name, points)
```

```typescript
const COLLECTION_NAME = "example_collection"

await client.createCollection(COLLECTION_NAME, {
    vectors: {
        size: 4096,
        distance: 'Cosine',
    }
});

await client.upsert(COLLECTION_NAME, {
    wait: true,
    points
})
```

## Searching for documents with Qdrant

Once all the documents are added, you can search for the most relevant documents.
```python body = { "input": "What is the best to use for vector search scaling?", "model": "solar-embedding-1-large-query", } response_body = upstage_session.post( UPSTAGE_BASE_URL, headers=headers, json=body ).json() client.search( collection_name=collection_name, query_vector=response_body["data"][0]["embedding"], ) ``` ```typescript body = { "input": "What is the best to use for vector search scaling?", "model": "solar-embedding-1-large-query", } response = await fetch(UPSTAGE_BASE_URL, { method: "POST", body: JSON.stringify(body), headers }); response_body = await response.json() await client.search(COLLECTION_NAME, { vector: response_body.data[0].embedding, }); ```
documentation/embeddings/upstage.md
---
title: John Snow Labs
weight: 2000
---

# Using John Snow Labs with Qdrant

John Snow Labs offers a variety of models, particularly in the healthcare domain. They have pre-trained models that can generate embeddings for medical text data.

## Installation

You can install the required package using the following pip command:

```bash
pip install johnsnowlabs
```

Below is a sketch of how you might obtain sentence embeddings with the `johnsnowlabs` library and store them in Qdrant. The model reference and the name of the embedding output column depend on the model you load; treat both as illustrative and verify them against the John Snow Labs Models Hub.

```python
import qdrant_client
from qdrant_client.models import Batch
from johnsnowlabs import nlp

# Start the underlying Spark session
nlp.start()

# Load a sentence-embedding pipeline (model reference is illustrative)
pipeline = nlp.load("en.embed_sentence.biobert.clinical_base_cased")

# Generate an embedding for a medical text
text = "John Snow Labs provides state-of-the-art healthcare NLP solutions."
result_df = pipeline.predict(text)

# The embedding column name depends on the loaded model
embedding = result_df["sentence_embedding_biobert"].iloc[0].tolist()

# Initialize Qdrant client
client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="HealthcareNLP",
    points=Batch(
        ids=[1],  # Unique ID for the data point
        vectors=[embedding],
    )
)
```
documentation/embeddings/johnsnow.md
---
title: Embeddings
weight: 15
---

# Supported Embedding Providers & Models

Qdrant supports all available text and multimodal dense vector embedding models as well as vector embedding services without any limitations.

## Some of the Embeddings you can use with Qdrant:

SentenceTransformers, BERT, SBERT, Clip, OpenClip, OpenAI, Vertex AI, Azure AI, AWS Bedrock, Jina AI, Upstage AI, Mistral AI, Cohere AI, Voyage AI, Aleph Alpha, Baidu Qianfan, BGE, Instruct, Watsonx Embeddings, Snowflake Embeddings, NVIDIA NeMo, Nomic, OCI Embeddings, Ollama Embeddings, MixedBread, Together AI, Clarifai, Databricks Embeddings, GPT4All Embeddings, John Snow Labs Embeddings.

Additionally, [any open-source embeddings from HuggingFace](https://huggingface.co/spaces/mteb/leaderboard) can be used with Qdrant, as shown in the sketch after the table below.

## Code samples:

| Embeddings Providers          | Description |
| ----------------------------- | ----------- |
| [Aleph Alpha](./aleph-alpha/) | Multilingual embeddings focused on European languages. |
| [Azure](./azure/)             | Microsoft's embedding model selection. |
| [Bedrock](./bedrock/)         | AWS managed service for foundation models and embeddings. |
| [Clarifai](./clarifai/)       | Embeddings for image and video recognition. |
| [Clip](./clip/)               | Aligns images and text, created by OpenAI. |
| [Cohere](./cohere/)           | Language model embeddings for NLP tasks. |
| [Databricks](./databricks/)   | Scalable embeddings integrated with Apache Spark. |
| [Gemini](./gemini/)           | Google’s multimodal embeddings for text and vision. |
| [GPT4All](./gpt4all/)         | Open-source, local embeddings for privacy-focused use. |
| [GradientAI](./gradient/)     | AI models for custom enterprise tasks. |
| [Instruct](./instruct/)       | Embeddings tuned for following instructions. |
| [Jina AI](./jina-embeddings/) | Customizable embeddings for neural search. |
| [John Snow Labs](./johnsnow/) | Medical and clinical embeddings. |
| [Mistral](./mistral/)         | Open-source, efficient language model embeddings. |
| [MixedBread](./mixedbread/)   | Open embedding models for search and RAG. |
| [Nomic](./nomic/)             | Open-source text embeddings with long (8192-token) context. |
| [Nvidia](./nvidia/)           | GPU-optimized embeddings from Nvidia. |
| [OCI](./oci/)                 | Oracle Cloud’s AI service with embeddings. |
| [Ollama](./ollama/)           | Run open-source embedding models locally. |
| [OpenAI](./openai/)           | Industry-leading embeddings for NLP. |
| [OpenCLIP](./openclip/)       | Open-source implementation of CLIP for image and text. |
| [Prem AI](./premai/)          | Embeddings from the PremAI development platform. |
| [Snowflake](./snowflake/)     | Scalable embeddings for big data. |
| [Together AI](./togetherai/)  | Hosted open-source embedding models. |
| [Upstage](./upstage/)         | Solar embeddings for queries and documents. |
| [Voyage AI](./voyage/)        | Text embeddings optimized for retrieval. |
| [Watsonx](./watsonx/)         | IBM's enterprise-grade embeddings. |
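As a quick illustration of the Hugging Face point above, any `sentence-transformers` model can feed Qdrant directly. A minimal sketch (the model name is just an example):

```python
from qdrant_client import QdrantClient, models
from sentence_transformers import SentenceTransformer

# Load any open-source embedding model from the Hugging Face Hub
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
vector = model.encode("Any open-source embedding model works with Qdrant.").tolist()

client = QdrantClient(":memory:")
client.create_collection(
    collection_name="hf-embeddings",
    vectors_config=models.VectorParams(size=len(vector), distance=models.Distance.COSINE),
)
client.upsert(
    collection_name="hf-embeddings",
    points=[models.PointStruct(id=1, vector=vector)],
)
```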
documentation/embeddings/_index.md
---
title: MixedBread
weight: 2200
---

# Using MixedBread with Qdrant

MixedBread is creating state-of-the-art models and tools that make search smarter, faster, and more relevant. Whether you're building a next-gen search engine or RAG (Retrieval Augmented Generation) systems, or whether you're enhancing your existing search solution, they've got the ingredients to make it happen. Their open embedding models, such as `mxbai-embed-large-v1`, are available on the Hugging Face Hub and work well with Qdrant for a variety of search tasks.

## Installation

The example below loads the open model through `sentence-transformers`; you can install the required packages with:

```bash
pip install sentence-transformers qdrant-client
```

## Integration Example

Below is a sketch of how to obtain embeddings using MixedBread's open `mxbai-embed-large-v1` model and store them in a Qdrant collection. (MixedBread also offers a hosted API; see their documentation for SDK details.)

```python
import qdrant_client
from qdrant_client.models import Batch
from sentence_transformers import SentenceTransformer

# Load the open MixedBread embedding model from the Hugging Face Hub
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# Generate an embedding
text = "MixedBread provides versatile embeddings for various domains."
embedding = model.encode(text).tolist()

# Initialize Qdrant client
client = qdrant_client.QdrantClient(host="localhost", port=6333)

# Upsert the embedding into Qdrant
client.upsert(
    collection_name="VersatileEmbeddings",
    points=Batch(
        ids=[1],
        vectors=[embedding],
    )
)
```
documentation/embeddings/mixedbread.md
--- title: Azure OpenAI weight: 950 --- # Using Azure OpenAI with Qdrant Azure OpenAI is Microsoft's platform for AI embeddings, focusing on powerful text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant. ## Installation You can install the required packages using the following pip command: ```bash pip install openai azure-identity python-dotenv qdrant-client ``` ## Code Example ```python import os import openai import dotenv import qdrant_client from qdrant_client.models import Batch from azure.identity import DefaultAzureCredential, get_bearer_token_provider dotenv.load_dotenv() # Set to True if using Azure Active Directory for authentication use_azure_active_directory = False # Qdrant client setup qdrant_client = qdrant_client.QdrantClient(url="http://localhost:6333") # Azure OpenAI Authentication if not use_azure_active_directory: endpoint = os.environ["AZURE_OPENAI_ENDPOINT"] api_key = os.environ["AZURE_OPENAI_API_KEY"] client = openai.AzureOpenAI( azure_endpoint=endpoint, api_key=api_key, api_version="2023-09-01-preview" ) else: endpoint = os.environ["AZURE_OPENAI_ENDPOINT"] client = openai.AzureOpenAI( azure_endpoint=endpoint, azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"), api_version="2023-09-01-preview" ) # Deployment name of the model in Azure OpenAI Studio deployment = "your-deployment-name" # Replace with your deployment name # Generate embeddings using the Azure OpenAI client text_input = "The food was delicious and the waiter..." embeddings_response = client.embeddings.create( model=deployment, input=text_input ) # Extract the embedding vector from the response embedding_vector = embeddings_response.data[0].embedding # Insert the embedding into Qdrant qdrant_client.upsert( collection_name="MyCollection", points=Batch( ids=[1], # This ID can be dynamically assigned or managed vectors=[embedding_vector], ) ) print("Embedding successfully upserted into Qdrant.") ```
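To complete the loop, the same deployment can embed a search query. A brief sketch reusing the clients from the example above:

```python
# Embed the query with the same deployment and search the collection
query_response = client.embeddings.create(
    model=deployment,
    input="How was the food?",
)

results = qdrant_client.search(
    collection_name="MyCollection",
    query_vector=query_response.data[0].embedding,
)
print(results)
```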
documentation/embeddings/azure.md
---
title: Database Optimization
weight: 2
---

# Frequently Asked Questions: Database Optimization

### How do I reduce memory usage?

The primary source of memory usage is vector data. There are several ways to address that:

- Configure [Quantization](../../guides/quantization/) to reduce the memory usage of vectors.
- Configure on-disk vector storage (see the configuration sketch at the end of this page).

The choice of the approach depends on your requirements.
Read more about [configuring the optimal use of Qdrant](../../tutorials/optimize/).

### How do you choose the machine configuration?

There are two main scenarios of Qdrant usage in terms of resource consumption:

- **Performance-optimized** -- when you need to serve as many vector searches as possible, as fast as possible. In this case, you need to have as much vector data in RAM as possible. Use our [calculator](https://cloud.qdrant.io/calculator) to estimate the required RAM.
- **Storage-optimized** -- when you need to store many vectors and minimize costs by compromising some search speed. In this case, pay attention to the disk speed instead. More about it in the article about [Memory Consumption](../../../articles/memory-consumption/).

### I configured on-disk vector storage, but memory usage is still high. Why?

Firstly, memory usage metrics as reported by `top` or `htop` may be misleading. They do not show the minimal amount of memory required to run the service.
If the RSS memory usage is 10 GB, it doesn't mean that it won't work on a machine with 8 GB of RAM.

Qdrant uses many techniques to reduce search latency, including caching disk data in RAM and preloading data from disk to RAM.
As a result, the Qdrant process might use more memory than the minimum required to run the service.

> Unused RAM is wasted RAM

If you want to limit the memory usage of the service, we recommend using [limits in Docker](https://docs.docker.com/config/containers/resource_constraints/#memory) or Kubernetes.

### My requests are very slow or time out. What should I do?

There are several possible reasons for that:

- **Using filters without a payload index** -- If you're performing a search with a filter but don't have a payload index, Qdrant will have to load the whole payload data from disk to check the filtering condition. Ensure you have adequately configured [payload indexes](../../concepts/indexing/#payload-index).
- **Usage of on-disk vector storage with slow disks** -- If you're using on-disk vector storage, ensure you have fast enough disks. We recommend using local SSDs with at least 50k IOPS. Read more about the influence of the disk speed on the search latency in the article about [Memory Consumption](../../../articles/memory-consumption/).
- **Large limit or non-optimal query parameters** -- A large limit or offset might lead to significant performance degradation. Please pay close attention to the query/collection parameters that significantly diverge from the defaults. They might be the reason for the performance issues.
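For reference, both remedies mentioned above are collection-level settings. A minimal sketch (the collection and field names are illustrative):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Store the original vectors on disk instead of RAM
client.create_collection(
    collection_name="example",
    vectors_config=models.VectorParams(
        size=768,
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
)

# Index a payload field that is used in filters
client.create_payload_index(
    collection_name="example",
    field_name="category",
    field_schema=models.PayloadSchemaType.KEYWORD,
)
```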
documentation/faq/database-optimization.md
---
title: Qdrant Fundamentals
weight: 1
---

# Frequently Asked Questions: General Topics

||||||
|-|-|-|-|-|
|[Vectors](/documentation/faq/qdrant-fundamentals/#vectors)|[Search](/documentation/faq/qdrant-fundamentals/#search)|[Collections](/documentation/faq/qdrant-fundamentals/#collections)|[Compatibility](/documentation/faq/qdrant-fundamentals/#compatibility)|[Cloud](/documentation/faq/qdrant-fundamentals/#cloud)|

## Vectors

### What is the maximum vector dimension supported by Qdrant?

Qdrant supports up to 65,535 dimensions by default, but this can be configured to support higher dimensions.

### What is the maximum size of vector metadata that can be stored?

There is no inherent limitation on metadata size, but it should be [optimized for performance and resource usage](/documentation/guides/optimize/). Users can set upper limits in the configuration.

### Can the same similarity search query yield different results on different machines?

Yes, due to differences in hardware configurations and parallel processing, results may vary slightly.

### What should I do with documents that produce small chunks under a fixed chunking strategy?

For documents with small chunks, consider merging chunks or using variable chunk sizes to optimize vector representation and search performance.

### How do I choose the right vector embeddings for my use case?

This depends on the nature of your data and the specific application. Consider factors like dimensionality, domain-specific models, and the performance characteristics of different embeddings.

### How does Qdrant handle different vector embeddings from various providers in the same collection?

Qdrant natively [supports multiple vectors per data point](/documentation/concepts/vectors/#multivectors), allowing different embeddings from various providers to coexist within the same collection.

### Can I migrate my embeddings from another vector store to Qdrant?

Yes, Qdrant supports migration of embeddings from other vector stores, facilitating easy transitions and adoption of Qdrant's features.

## Search

### How does Qdrant handle real-time data updates and search?

Qdrant supports live updates for vector data: newly inserted, updated, and deleted vectors are reflected in search results immediately. The system uses full-scan search on unindexed segments while the index is being updated in the background.

### My search results contain vectors with null values. Why?

By default, Qdrant tries to minimize network traffic and doesn't return vectors in search results. But you can force Qdrant to do so by setting the `with_vector` parameter of the Search/Scroll request to `true`.

If you're still seeing `"vector": null` in your results, it might be that the vector you're passing is not in the correct format, or there's an issue with how you're calling the upsert method.

### How can I search without a vector?

You are likely looking for the [scroll](../../concepts/points/#scroll-points) method. It allows you to retrieve the records based on filters or even iterate over all the records in the collection.

### Does Qdrant support a full-text search or a hybrid search?

Qdrant is first and foremost a vector search engine, and we only implement full-text support as long as it doesn't compromise the vector search use case. That includes both the interface and the performance.
What Qdrant can do:

- Search with full-text filters
- Apply full-text filters to the vector search (i.e., perform vector search among the records with specific words or phrases)
- Do prefix search and semantic [search-as-you-type](../../../articles/search-as-you-type/)
- Sparse vectors, as used in [SPLADE](https://github.com/naver/splade) or similar models
- [Multi-vectors](../../concepts/vectors/#multivectors), for example ColBERT and other late-interaction models
- Combination of the [multiple searches](../../concepts/hybrid-queries/)

What Qdrant doesn't plan to support:

- Non-vector-based retrieval or ranking functions
- Built-in ontologies or knowledge graphs
- Query analyzers and other NLP tools

Of course, you can always combine Qdrant with any specialized tool you need, including full-text search engines.
Read more about [our approach](../../../articles/hybrid-search/) to hybrid search.

## Collections

### How many collections can I create?

As many as you want, but be aware that each collection requires additional resources.
It is _highly_ recommended not to create many small collections, as it will lead to significant resource consumption overhead.

We consider creating a collection for each user/dialog/document as an antipattern.

Please read more about collections, isolation, and multiple users in our [Multitenancy](../../tutorials/multiple-partitions/) tutorial.

### How do I upload a large number of vectors into a Qdrant collection?

Read about our recommendations in the [bulk upload](../../tutorials/bulk-upload/) tutorial.

### Can I only store quantized vectors and discard full precision vectors?

No, Qdrant requires full precision vectors for operations like reindexing, rescoring, etc.

## Compatibility

### Is Qdrant compatible with CPUs or GPUs for vector computation?

Qdrant primarily relies on CPU acceleration for scalability and efficiency, with no current support for GPU acceleration.

### Do you guarantee compatibility across versions?

We only guarantee compatibility between two consecutive minor versions. This also applies to client versions. Ensure your client version is never more than one minor version away from your cluster version.

While we will assist with break/fix troubleshooting of issues and errors specific to our products, Qdrant is not accountable for reviewing, writing (or rewriting), or debugging custom code.

### Do you support downgrades?

We do not support downgrading a cluster on any of our products. If you deploy a newer version of Qdrant, your data is automatically migrated to the newer storage format. This migration is not reversible.

### How do I avoid issues when updating to the latest version?

We only guarantee compatibility if you update between consecutive versions. You would need to upgrade versions one at a time: `1.1 -> 1.2`, then `1.2 -> 1.3`, then `1.3 -> 1.4`.

## Cloud

### Is it possible to scale down a Qdrant Cloud cluster?

It is possible to vertically scale down a Qdrant Cloud cluster, as long as the disk size is not reduced. Horizontal downscaling is currently not possible, but it is on our roadmap. In some cases, we might be able to help you with that manually. Please open a support ticket so that we can assist.
documentation/faq/qdrant-fundamentals.md
--- title: FAQ weight: 22 is_empty: true ---
documentation/faq/_index.md
---
title: Airbyte
aliases: [ ../integrations/airbyte/, ../frameworks/airbyte/ ]
---

# Airbyte

[Airbyte](https://airbyte.com/) is an open-source data integration platform that helps you replicate your data between different systems. It has a [growing list of connectors](https://docs.airbyte.io/integrations) that can be used to ingest data from multiple sources. Building data pipelines is also crucial for managing the data in Qdrant, and Airbyte is a great tool for this purpose.

Airbyte takes care of data ingestion from a selected source, while Qdrant helps you build a search engine on top of it. There are three supported modes of ingesting data into Qdrant:

* **Full Refresh Sync**
* **Incremental - Append Sync**
* **Incremental - Append + Deduped**

You can read more about these modes in the [Airbyte documentation](https://docs.airbyte.io/integrations/destinations/qdrant).

## Prerequisites

Before you start, make sure you have the following:

1. An Airbyte instance, either [Open Source](https://airbyte.com/solutions/airbyte-open-source), [Self-Managed](https://airbyte.com/solutions/airbyte-enterprise), or [Cloud](https://airbyte.com/solutions/airbyte-cloud).
2. A running instance of Qdrant. It has to be accessible by URL from the machine where Airbyte is running. You can follow the [installation guide](/documentation/guides/installation/) to set up Qdrant.

## Setting up Qdrant as a destination

Once you have a running instance of Airbyte, you can set up Qdrant as a destination directly in the UI. Airbyte's Qdrant destination is connected with a single collection in Qdrant.

![Airbyte Qdrant destination](/documentation/frameworks/airbyte/qdrant-destination.png)

### Text processing

Airbyte has some built-in mechanisms to transform your texts into embeddings. You can choose how you want to chunk your fields into pieces before calculating the embeddings, and also which fields should be used to create the point payload.

![Processing settings](/documentation/frameworks/airbyte/processing.png)

### Embeddings

You can choose the model that will be used to calculate the embeddings. Currently, Airbyte supports multiple models, including OpenAI and Cohere.

![Embeddings settings](/documentation/frameworks/airbyte/embedding.png)

Using precomputed embeddings from your data source is also possible. In this case, you can pass the field name containing the embeddings and their dimensionality.

![Precomputed embeddings settings](/documentation/frameworks/airbyte/precomputed-embedding.png)

### Qdrant connection details

Finally, configure the target Qdrant instance and collection. If you use the built-in authentication mechanism, this is where you pass the token.

![Qdrant connection details](/documentation/frameworks/airbyte/qdrant-config.png)

Once you confirm the destination, Airbyte will test whether the specified Qdrant cluster is accessible and can be used as a destination.

## Setting up connection

Airbyte combines sources and destinations into a single entity called a connection. Once you have a destination configured and a source, you can create a connection between them. It doesn't matter what source you use, as long as Airbyte supports it. The process is straightforward, but the details depend on the chosen source.

![Airbyte connection](/documentation/frameworks/airbyte/connection.png)

## Further Reading

* [Airbyte documentation](https://docs.airbyte.com/understanding-airbyte/connections/).
* [Source Code](https://github.com/airbytehq/airbyte/tree/master/airbyte-integrations/connectors/destination-qdrant)
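
As a quick sanity check after your first sync, you can count the points Airbyte wrote. A minimal sketch with the Python client; the URL and collection name are assumptions, so use whatever you configured in the destination:

```python
from qdrant_client import QdrantClient

# Connect to the same Qdrant instance configured as the Airbyte destination.
client = QdrantClient(url="http://localhost:6333")

# "airbyte_destination" is a placeholder - use your configured collection name.
print(client.count(collection_name="airbyte_destination"))
```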
documentation/data-management/airbyte.md
---
title: Apache Spark
aliases: [ ../integrations/spark/, ../frameworks/spark/ ]
---

# Apache Spark

[Spark](https://spark.apache.org/) is a distributed computing framework designed for big data processing and analytics. The [Qdrant-Spark connector](https://github.com/qdrant/qdrant-spark) enables Qdrant to be a storage destination in Spark.

## Installation

You can set up the Qdrant-Spark Connector in a few different ways, depending on your preferences and requirements.

### GitHub Releases

The simplest way to get started is by downloading pre-packaged JAR file releases from the [GitHub releases page](https://github.com/qdrant/qdrant-spark/releases). These JAR files come with all the necessary dependencies.

### Building from Source

If you prefer to build the JAR from source, you'll need [JDK 8](https://www.azul.com/downloads/#zulu) and [Maven](https://maven.apache.org/) installed on your system. Once you have the prerequisites in place, navigate to the project's root directory and run the following command:

```bash
mvn package
```

This command will compile the source code and generate a fat JAR, which will be stored in the `target` directory by default.

### Maven Central

For use with Java and Scala projects, the package can be found [here](https://central.sonatype.com/artifact/io.qdrant/spark).

## Usage

Below, we'll walk through the steps of creating a Spark session with Qdrant support and loading data into Qdrant.

### Creating a single-node Spark session with Qdrant Support

To begin, import the necessary libraries and create a Spark session with Qdrant support:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.config(
        "spark.jars",
        "spark-VERSION.jar",  # Specify the downloaded JAR file
    )
    .master("local[*]")
    .appName("qdrant")
    .getOrCreate()
)
```

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .config("spark.jars", "spark-VERSION.jar") // Specify the downloaded JAR file
  .master("local[*]")
  .appName("qdrant")
  .getOrCreate()
```

```java
import org.apache.spark.sql.SparkSession;

public class QdrantSparkJavaExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .config("spark.jars", "spark-VERSION.jar") // Specify the downloaded JAR file
                .master("local[*]")
                .appName("qdrant")
                .getOrCreate();
    }
}
```

### Loading data into Qdrant

<aside role="status">Before loading the data using this connector, a collection has to be <a href="/documentation/concepts/collections/#create-a-collection">created</a> in advance with the appropriate vector dimensions and configurations.</aside>

The connector supports ingesting multiple named/unnamed, dense/sparse vectors.
_Click each to expand._ <details> <summary><b>Unnamed/Default vector</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", <QDRANT_GRPC_URL>) .option("collection_name", <QDRANT_COLLECTION_NAME>) .option("embedding_field", <EMBEDDING_FIELD_NAME>) # Expected to be a field of type ArrayType(FloatType) .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` </details> <details> <summary><b>Named vector</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", <QDRANT_GRPC_URL>) .option("collection_name", <QDRANT_COLLECTION_NAME>) .option("embedding_field", <EMBEDDING_FIELD_NAME>) # Expected to be a field of type ArrayType(FloatType) .option("vector_name", <VECTOR_NAME>) .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` > #### NOTE > > The `embedding_field` and `vector_name` options are maintained for backward compatibility. It is recommended to use `vector_fields` and `vector_names` for named vectors as shown below. </details> <details> <summary><b>Multiple named vectors</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", "<QDRANT_GRPC_URL>") .option("collection_name", "<QDRANT_COLLECTION_NAME>") .option("vector_fields", "<COLUMN_NAME>,<ANOTHER_COLUMN_NAME>") .option("vector_names", "<VECTOR_NAME>,<ANOTHER_VECTOR_NAME>") .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` </details> <details> <summary><b>Sparse vectors</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", "<QDRANT_GRPC_URL>") .option("collection_name", "<QDRANT_COLLECTION_NAME>") .option("sparse_vector_value_fields", "<COLUMN_NAME>") .option("sparse_vector_index_fields", "<COLUMN_NAME>") .option("sparse_vector_names", "<SPARSE_VECTOR_NAME>") .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` </details> <details> <summary><b>Multiple sparse vectors</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", "<QDRANT_GRPC_URL>") .option("collection_name", "<QDRANT_COLLECTION_NAME>") .option("sparse_vector_value_fields", "<COLUMN_NAME>,<ANOTHER_COLUMN_NAME>") .option("sparse_vector_index_fields", "<COLUMN_NAME>,<ANOTHER_COLUMN_NAME>") .option("sparse_vector_names", "<SPARSE_VECTOR_NAME>,<ANOTHER_SPARSE_VECTOR_NAME>") .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` </details> <details> <summary><b>Combination of named dense and sparse vectors</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", "<QDRANT_GRPC_URL>") .option("collection_name", "<QDRANT_COLLECTION_NAME>") .option("vector_fields", "<COLUMN_NAME>,<ANOTHER_COLUMN_NAME>") .option("vector_names", "<VECTOR_NAME>,<ANOTHER_VECTOR_NAME>") .option("sparse_vector_value_fields", "<COLUMN_NAME>,<ANOTHER_COLUMN_NAME>") .option("sparse_vector_index_fields", "<COLUMN_NAME>,<ANOTHER_COLUMN_NAME>") .option("sparse_vector_names", "<SPARSE_VECTOR_NAME>,<ANOTHER_SPARSE_VECTOR_NAME>") .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` </details> <details> <summary><b>No vectors - Entire dataframe is stored as payload</b></summary> ```python <pyspark.sql.DataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", "<QDRANT_GRPC_URL>") 
.option("collection_name", "<QDRANT_COLLECTION_NAME>") .option("schema", <pyspark.sql.DataFrame>.schema.json()) .mode("append") .save() ``` </details> ## Databricks <aside role="status"> <p>Check out our <a href="/documentation/send-data/databricks/" target="_blank">example</a> of using the Spark connector with Databricks.</p> </aside> You can use the `qdrant-spark` connector as a library in [Databricks](https://www.databricks.com/). - Go to the `Libraries` section in your Databricks cluster dashboard. - Select `Install New` to open the library installation modal. - Search for `io.qdrant:spark:VERSION` in the Maven packages and click `Install`. ![Databricks](/documentation/frameworks/spark/databricks.png) ## Datatype Support Qdrant supports all the Spark data types, and the appropriate data types are mapped based on the provided schema. ## Configuration Options | Option | Description | Column DataType | Required | | :--------------------------- | :------------------------------------------------------------------ | :---------------------------- | :------- | | `qdrant_url` | GRPC URL of the Qdrant instance. Eg: <http://localhost:6334> | - | ✅ | | `collection_name` | Name of the collection to write data into | - | ✅ | | `schema` | JSON string of the dataframe schema | - | ✅ | | `embedding_field` | Name of the column holding the embeddings | `ArrayType(FloatType)` | ❌ | | `id_field` | Name of the column holding the point IDs. Default: Random UUID | `StringType` or `IntegerType` | ❌ | | `batch_size` | Max size of the upload batch. Default: 64 | - | ❌ | | `retries` | Number of upload retries. Default: 3 | - | ❌ | | `api_key` | Qdrant API key for authentication | - | ❌ | | `vector_name` | Name of the vector in the collection. | - | ❌ | | `vector_fields` | Comma-separated names of columns holding the vectors. | `ArrayType(FloatType)` | ❌ | | `vector_names` | Comma-separated names of vectors in the collection. | - | ❌ | | `sparse_vector_index_fields` | Comma-separated names of columns holding the sparse vector indices. | `ArrayType(IntegerType)` | ❌ | | `sparse_vector_value_fields` | Comma-separated names of columns holding the sparse vector values. | `ArrayType(FloatType)` | ❌ | | `sparse_vector_names` | Comma-separated names of the sparse vectors in the collection. | - | ❌ | | `shard_key_selector` | Comma-separated names of custom shard keys to use during upsert. | - | ❌ | For more information, be sure to check out the [Qdrant-Spark GitHub repository](https://github.com/qdrant/qdrant-spark). The Apache Spark guide is available [here](https://spark.apache.org/docs/latest/quick-start.html). Happy data processing!
documentation/data-management/spark.md
---
title: Confluent Kafka
aliases: [ ../frameworks/confluent/ ]
---

![Confluent Logo](/documentation/frameworks/confluent/confluent-logo.png)

Built by the original creators of Apache Kafka®, [Confluent Cloud](https://www.confluent.io/confluent-cloud/?utm_campaign=tm.pmm_cd.cwc_partner_Qdrant_generic&utm_source=Qdrant&utm_medium=partnerref) is a cloud-native and complete data streaming platform available on AWS, Azure, and Google Cloud. The platform includes a fully managed, elastically scaling Kafka engine, 120+ connectors, serverless Apache Flink®, enterprise-grade security controls, and a robust governance suite.

With our [Qdrant-Kafka Sink Connector](https://github.com/qdrant/qdrant-kafka), Qdrant is part of the [Connect with Confluent](https://www.confluent.io/partners/connect/) technology partner program. It brings fully managed data streams directly to organizations from Confluent Cloud, making it easier for organizations to stream any data to Qdrant with a fully managed Apache Kafka service.

## Usage

### Pre-requisites

- A Confluent Cloud account. You can begin with a [free trial](https://www.confluent.io/confluent-cloud/tryfree/?utm_campaign=tm.pmm_cd.cwc_partner_qdrant_tryfree&utm_source=qdrant&utm_medium=partnerref) with credits for the first 30 days.
- A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).

### Installation

1) Download the latest connector zip file from [Confluent Hub](https://www.confluent.io/hub/qdrant/qdrant-kafka).

2) Configure an environment and cluster on Confluent and create a topic to produce messages for.

3) Navigate to the `Connectors` section of the Confluent cluster and click `Add Plugin`. Upload the zip file with the following info.

![Qdrant Connector Install](/documentation/frameworks/confluent/install.png)

4) Once installed, navigate to the connector and set the following configuration values.

![Qdrant Connector Config](/documentation/frameworks/confluent/config.png)

Replace the placeholder values with your credentials.

5) Add the Qdrant instance host to the allowed networking endpoints.

![Qdrant Connector Endpoint](/documentation/frameworks/confluent/endpoint.png)

6) Start the connector.

## Producing Messages

You can now produce messages for the configured topic, and they'll be written into the configured Qdrant instance. (A Python producer sketch follows at the end of this page.)

![Qdrant Connector Message](/documentation/frameworks/confluent/message.png)

## Message Formats

The connector supports messages in the following formats.

_Click each to expand._

<details>
<summary><b>Unnamed/Default vector</b></summary>

Reference: [Creating a collection with a default vector](https://qdrant.tech/documentation/concepts/collections/#create-a-collection).

```json
{
    "collection_name": "{collection_name}",
    "id": 1,
    "vector": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    "payload": {
        "name": "kafka",
        "description": "Kafka is a distributed streaming platform",
        "url": "https://kafka.apache.org/"
    }
}
```

</details>

<details>
<summary><b>Named multiple vectors</b></summary>

Reference: [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
```json { "collection_name": "{collection_name}", "id": 1, "vector": { "some-dense": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], "some-other-dense": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ] }, "payload": { "name": "kafka", "description": "Kafka is a distributed streaming platform", "url": "https://kafka.apache.org/" } } ``` </details> <details> <summary><b>Sparse vectors</b></summary> Reference: [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { "collection_name": "{collection_name}", "id": 1, "vector": { "some-sparse": { "indices": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "values": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, "payload": { "name": "kafka", "description": "Kafka is a distributed streaming platform", "url": "https://kafka.apache.org/" } } ``` </details> <details> <summary><b>Multi-vectors</b></summary> Reference: - [Multi-vectors](https://qdrant.tech/documentation/concepts/vectors/#multivectors) ```json { "collection_name": "{collection_name}", "id": 1, "vector": { "some-multi": [ [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ], [ 1.0, 0.9, 0.8, 0.5, 0.4, 0.8, 0.6, 0.4, 0.2, 0.1 ] ] }, "payload": { "name": "kafka", "description": "Kafka is a distributed streaming platform", "url": "https://kafka.apache.org/" } } ``` </details> <details> <summary><b>Combination of named dense and sparse vectors</b></summary> Reference: - [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). - [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { "collection_name": "{collection_name}", "id": "a10435b5-2a58-427a-a3a0-a5d845b147b7", "vector": { "some-other-dense": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], "some-sparse": { "indices": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "values": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, "payload": { "name": "kafka", "description": "Kafka is a distributed streaming platform", "url": "https://kafka.apache.org/" } } ``` </details> ## Further Reading - [Kafka Connect Docs](https://docs.confluent.io/platform/current/connect/index.html) - [Confluent Connectors Docs](https://docs.confluent.io/cloud/current/connectors/bring-your-connector/custom-connector-qs.html)
documentation/data-management/confluent.md
---
title: Redpanda Connect
---

![Redpanda Cover](/documentation/data-management/redpanda/redpanda-cover.png)

[Redpanda Connect](https://www.redpanda.com/connect) is a declarative, data-agnostic streaming service designed for efficient, stateless processing steps. It offers transaction-based resiliency with back pressure, ensuring at-least-once delivery when connecting to at-least-once sources with sinks, without the need to persist messages during transit.

Connect pipelines are configured using a YAML file, which organizes components hierarchically. Each section represents a different component type, such as inputs, processors, and outputs, and these can have nested child components and [dynamic values](https://docs.redpanda.com/redpanda-connect/configuration/interpolation/).

The [Qdrant Output](https://docs.redpanda.com/redpanda-connect/components/outputs/qdrant/) component enables streaming vector data into Qdrant collections in your Redpanda pipelines.

## Example

An example configuration of the output, once the inputs and processors are set, would look like this (an example input message is shown at the end of this page):

```yaml
input:
  # https://docs.redpanda.com/redpanda-connect/components/inputs/about/

pipeline:
  processors:
    # https://docs.redpanda.com/redpanda-connect/components/processors/about/

output:
  label: "qdrant-output"
  qdrant:
    max_in_flight: 64
    batching:
      count: 8
    grpc_host: xyz-example.eu-central.aws.cloud.qdrant.io:6334
    api_token: "<provide-your-own-key>"
    tls:
      enabled: true
      # skip_cert_verify: false
      # enable_renegotiation: false
      # root_cas: ""
      # root_cas_file: ""
      # client_certs: []
    collection_name: "<collection_name>"
    id: root = uuid_v4()
    vector_mapping: 'root = {"some_dense": this.vector, "some_sparse": {"indices": [23,325,532],"values": [0.352,0.532,0.532]}}'
    payload_mapping: 'root = {"field": this.value, "field_2": 987}'
```

## Further Reading

- [Getting started with Connect](https://docs.redpanda.com/redpanda-connect/guides/getting_started/)
- [Qdrant Output Reference](https://docs.redpanda.com/redpanda-connect/components/outputs/qdrant/)
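
For orientation, the `vector_mapping` and `payload_mapping` above read `this.vector` and `this.value` from each incoming document, so they assume input messages shaped roughly like the following hypothetical example:

```json
{
  "vector": [0.352, 0.532, 0.532],
  "value": "some payload text"
}
```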
documentation/data-management/redpanda.md
---
title: DLT
aliases: [ ../integrations/dlt/, ../frameworks/dlt/ ]
---

# DLT (Data Load Tool)

[DLT](https://dlthub.com/) is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets.

With the DLT-Qdrant integration, you can now select Qdrant as a DLT destination to load data into.

**DLT Enables**

- Automated maintenance - with schema inference, alerts and short declarative code, maintenance becomes simple.
- Run it where Python runs - on Airflow, serverless functions, notebooks. Scales on micro and large infrastructure alike.
- User-friendly, declarative interface that removes knowledge obstacles for beginners while empowering senior professionals.

## Usage

To get started, install `dlt` with the `qdrant` extra.

```bash
pip install "dlt[qdrant]"
```

Configure the destination in the DLT secrets file. The file is located at `~/.dlt/secrets.toml` by default. Add the following section to the secrets file.

```toml
[destination.qdrant.credentials]
location = "https://your-qdrant-url"
api_key = "your-qdrant-api-key"
```

If no credentials are provided, `location` defaults to `http://localhost:6333` with no `api_key` - the defaults for a local Qdrant instance. Find more information about DLT configurations [here](https://dlthub.com/docs/general-usage/credentials).

Define the source of the data.

```python
import dlt
from dlt.destinations.qdrant import qdrant_adapter

movies = [
    {
        "title": "Blade Runner",
        "year": 1982,
        "description": "The film is about a dystopian vision of the future that combines noir elements with sci-fi imagery."
    },
    {
        "title": "Ghost in the Shell",
        "year": 1995,
        "description": "The film is about a cyborg policewoman and her partner who set out to find the main culprit behind brain hacking, the Puppet Master."
    },
    {
        "title": "The Matrix",
        "year": 1999,
        "description": "The movie is set in the 22nd century and tells the story of a computer hacker who joins an underground group fighting the powerful computers that rule the earth."
    }
]
```

<aside role="status">
A more comprehensive pipeline would load data from some API or use one of <a href="https://dlthub.com/docs/dlt-ecosystem/verified-sources">DLT's verified sources</a>.
</aside>

Define the pipeline.

```python
pipeline = dlt.pipeline(
    pipeline_name="movies",
    destination="qdrant",
    dataset_name="movies_dataset",
)
```

Run the pipeline.

```python
info = pipeline.run(
    qdrant_adapter(
        movies,
        embed=["title", "description"]
    )
)
```

The data is now loaded into Qdrant.

To use vector search after the data has been loaded, you must specify which fields Qdrant needs to generate embeddings for. You do that by wrapping the data (or [DLT resource](https://dlthub.com/docs/general-usage/resource)) with the `qdrant_adapter` function, as in the `embed=["title", "description"]` argument above.

## Write disposition

A DLT [write disposition](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/#write-disposition) defines how the data should be written to the destination. All write dispositions are supported by the Qdrant destination.

## DLT Sync

The Qdrant destination supports syncing of the [`DLT` state](https://dlthub.com/docs/general-usage/state#syncing-state-with-destination).

## Next steps

- The comprehensive Qdrant DLT destination documentation can be found [here](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/).
- [Source Code](https://github.com/dlt-hub/dlt/tree/devel/dlt/destinations/impl/qdrant)
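
As a follow-up to the pipeline above, you can peek at what was loaded with the Qdrant client directly. The collection name below is an assumption derived from the pipeline's `dataset_name`; list the collections first to find the exact name DLT chose.

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="https://your-qdrant-url", api_key="your-qdrant-api-key")

# List all collections to find the one the pipeline created.
print(client.get_collections())

# "movies_dataset_movies" is an assumed name - replace it with the listed one.
points, _ = client.scroll(collection_name="movies_dataset_movies", limit=3, with_payload=True)
for point in points:
    print(point.payload)
```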
documentation/data-management/dlt.md
--- title: Apache Airflow aliases: [ ../frameworks/airflow/ ] --- # Apache Airflow [Apache Airflow](https://airflow.apache.org/) is an open-source platform for authoring, scheduling and monitoring data and computing workflows. Airflow uses Python to create workflows that can be easily scheduled and monitored. Qdrant is available as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in Airflow to interface with the database. ## Prerequisites Before configuring Airflow, you need: 1. A Qdrant instance to connect to. You can set one up in our [installation guide](/documentation/guides/installation/). 2. A running Airflow instance. You can use their [Quick Start Guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html). ## Installation You can install the Qdrant provider by running `pip install apache-airflow-providers-qdrant` in your Airflow shell. **NOTE**: You'll have to restart your Airflow session for the provider to be available. ## Setting up a connection Open the `Admin-> Connections` section of the Airflow UI. Click the `Create` link to create a new [Qdrant connection](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/connections.html). ![Qdrant connection](/documentation/frameworks/airflow/connection.png) You can also set up a connection using [environment variables](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#environment-variables-connections) or an [external secret backend](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html). ## Qdrant hook An Airflow hook is an abstraction of a specific API that allows Airflow to interact with an external system. ```python from airflow.providers.qdrant.hooks.qdrant import QdrantHook hook = QdrantHook(conn_id="qdrant_connection") hook.verify_connection() ``` A [`qdrant_client#QdrantClient`](https://pypi.org/project/qdrant-client/) instance is available via `@property conn` of the `QdrantHook` instance for use within your Airflow workflows. ```python from qdrant_client import models hook.conn.count("<COLLECTION_NAME>") hook.conn.upsert( "<COLLECTION_NAME>", points=[ models.PointStruct(id=32, vector=[0.32, 0.12, 0.123], payload={"color": "red"}) ], ) ``` ## Qdrant Ingest Operator The Qdrant provider also provides a convenience operator for uploading data to a Qdrant collection that internally uses the Qdrant hook. ```python from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator vectors = [ [0.11, 0.22, 0.33, 0.44], [0.55, 0.66, 0.77, 0.88], [0.88, 0.11, 0.12, 0.13], ] ids = [32, 21, "b626f6a9-b14d-4af9-b7c3-43d8deb719a6"] payload = [{"meta": "data"}, {"meta": "data_2"}, {"meta": "data_3", "extra": "data"}] QdrantIngestOperator( conn_id="qdrant_connection", task_id="qdrant_ingest", collection_name="<COLLECTION_NAME>", vectors=vectors, ids=ids, payload=payload, ) ``` ## Reference - 📦 [Provider package PyPI](https://pypi.org/project/apache-airflow-providers-qdrant/) - 📚 [Provider docs](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) - 📄 [Source Code](https://github.com/apache/airflow/tree/main/airflow/providers/qdrant)
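
The `QdrantIngestOperator` shown above becomes a task once it is placed inside a DAG. A minimal sketch follows; the DAG id, schedule, and collection name are illustrative, and `schedule=None` assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator

with DAG(
    dag_id="qdrant_ingest_demo",
    start_date=datetime(2024, 1, 1),
    schedule=None,  # Trigger manually; use a cron string for periodic ingestion
    catchup=False,
) as dag:
    ingest = QdrantIngestOperator(
        conn_id="qdrant_connection",
        task_id="qdrant_ingest",
        collection_name="<COLLECTION_NAME>",
        vectors=[[0.11, 0.22, 0.33, 0.44]],
        ids=[32],
        payload=[{"meta": "data"}],
    )
```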
documentation/data-management/airflow.md
---
title: MindsDB
aliases: [ ../integrations/mindsdb/, ../frameworks/mindsdb/ ]
---

# MindsDB

[MindsDB](https://mindsdb.com) is an AI automation platform for building AI/ML powered features and applications. It works by connecting any source of data with any AI/ML model or framework and automating how real-time data flows between them.

With the MindsDB-Qdrant integration, you can now select Qdrant as a database to load into and retrieve from with semantic search and filtering.

**MindsDB allows you to easily**:

- Connect to any store of data or end-user application.
- Pass data to an AI model from any store of data or end-user application.
- Plug the output of an AI model into any store of data or end-user application.
- Fully automate these workflows to build AI-powered features and applications.

## Usage

To get started with Qdrant and MindsDB, the following syntax can be used.

```sql
CREATE DATABASE qdrant_test
WITH ENGINE = "qdrant",
PARAMETERS = {
    "location": ":memory:",
    "collection_config": {
        "size": 386,
        "distance": "Cosine"
    }
}
```

The available arguments for instantiating Qdrant can be found [here](https://github.com/mindsdb/mindsdb/blob/23a509cb26bacae9cc22475497b8644e3f3e23c3/mindsdb/integrations/handlers/qdrant_handler/qdrant_handler.py#L408-L468).

## Creating a new table

- Qdrant options for creating a collection can be specified as `collection_config` in the `CREATE DATABASE` parameters.
- By default, UUIDs are assigned as record IDs. You can provide your own IDs under the `id` column.

```sql
CREATE TABLE qdrant_test.test_table (
    SELECT embeddings, '{"source": "bbc"}' as metadata
    FROM mysql_demo_db.test_embeddings
);
```

## Querying the database

#### Perform a full retrieval using the following syntax.

```sql
SELECT * FROM qdrant_test.test_table
```

By default, the `LIMIT` is set to 10 and the `OFFSET` is set to 0.

#### Perform a similarity search using your embeddings

<aside role="status">Qdrant supports <a href="/documentation/concepts/indexing/#payload-index">payload indexing</a> that vastly improves retrieval efficiency with filters and is highly recommended. Please note that this feature currently cannot be configured via MindsDB and must be set up separately if needed.</aside>

```sql
SELECT * FROM qdrant_test.test_table
WHERE search_vector = (SELECT embeddings FROM mysql_demo_db.test_embeddings LIMIT 1)
```

#### Perform a search using filters

```sql
SELECT * FROM qdrant_test.test_table
WHERE `metadata.source` = 'bbc';
```

#### Delete entries using IDs

```sql
DELETE FROM qdrant_test.test_table
WHERE id = 2
```

#### Delete entries using filters

```sql
DELETE FROM qdrant_test.test_table
WHERE `metadata.source` = 'bbc';
```

#### Drop a table

```sql
DROP TABLE qdrant_test.test_table;
```

## Next steps

- You can find more information pertaining to MindsDB and its datasources [here](https://docs.mindsdb.com/).
- [Source Code](https://github.com/mindsdb/mindsdb/tree/main/mindsdb/integrations/handlers/qdrant_handler)
documentation/data-management/mindsdb.md
---
title: Apache NiFi
aliases: [ ../frameworks/nifi/ ]
---

# Apache NiFi

[NiFi](https://nifi.apache.org/) is a real-time data ingestion platform that manages and transfers data between numerous source and destination systems. It supports many protocols and offers a web-based user interface for developing and monitoring data flows.

NiFi supports ingesting and querying data in Qdrant via its processor modules.

## Configuration

![NiFi Qdrant configuration](/documentation/frameworks/nifi/nifi-conifg.png)

You can configure the Qdrant NiFi processors with your Qdrant credentials and query/upload configurations. The processors offer two built-in embedding providers to encode data into vector embeddings: HuggingFace and OpenAI.

## Put Qdrant

![NiFI Put Qdrant](/documentation/frameworks/nifi/nifi-put-qdrant.png)

The `Put Qdrant` processor can ingest NiFi [FlowFile](https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html#intro) data into a Qdrant collection.

## Query Qdrant

![NiFI Query Qdrant](/documentation/frameworks/nifi/nifi-query-qdrant.png)

The `Query Qdrant` processor can perform a similarity search across a Qdrant collection and return a [FlowFile](https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html#intro) result.

## Further Reading

- [NiFi Documentation](https://nifi.apache.org/documentation/v2/).
- [Source Code](https://github.com/apache/nifi-python-extensions)
documentation/data-management/nifi.md
--- title: InfinyOn Fluvio --- ![Fluvio Logo](/documentation/data-management/fluvio/fluvio-logo.png) [InfinyOn Fluvio](https://www.fluvio.io/) is an open-source platform written in Rust for high speed, real-time data processing. It is cloud native, designed to work with any infrastructure type, from bare metal hardware to containerized platforms. ## Usage with Qdrant With the [Qdrant Fluvio Connector](https://github.com/qdrant/qdrant-fluvio), you can stream records from Fluvio topics to Qdrant collections, leveraging Fluvio's delivery guarantees and high-throughput. ### Pre-requisites - A Fluvio installation. You can refer to the [Fluvio Quickstart](https://www.fluvio.io/docs/fluvio/quickstart/) for instructions. - Qdrant server to connect to. You can set up a [local instance](/documentation/quickstart/) or a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). ### Downloading the connector Run the following commands after [setting up Fluvio](https://www.fluvio.io/docs/fluvio/quickstart). ```console cdk hub download qdrant/qdrant-sink@0.1.0 ``` ### Example Config > _config.yaml_ ```yaml apiVersion: 0.1.0 meta: version: 0.1.0 name: my-qdrant-connector type: qdrant-sink topic: topic-name secrets: - name: QDRANT_API_KEY qdrant: url: https://xyz-example.eu-central.aws.cloud.qdrant.io:6334 api_key: "${{ secrets.QDRANT_API_KEY }}" ``` > _secrets.txt_ ```text QDRANT_API_KEY=<SOME_API_KEY> ``` ### Running ```console cdk deploy start --ipkg qdrant-qdrant-sink-0.1.0.ipkg -c config.yaml --secrets secrets.txt ``` ### Produce Messages You can now run the following to generate messages to be written into Qdrant. ```console fluvio produce topic-name ``` ### Message Formats This sink connector supports messages with dense/sparse/multi vectors. _Click each to expand._ <details> <summary><b>Unnamed/Default vector</b></summary> Reference: [Creating a collection with a default vector](https://qdrant.tech/documentation/concepts/collections/#create-a-collection). ```json { "collection_name": "{collection_name}", "id": 1, "vector": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], "payload": { "name": "fluvio", "description": "Solution for distributed stream processing", "url": "https://www.fluvio.io/" } } ``` </details> <details> <summary><b>Named multiple vectors</b></summary> Reference: [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). ```json { "collection_name": "{collection_name}", "id": 1, "vector": { "some-dense": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], "some-other-dense": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ] }, "payload": { "name": "fluvio", "description": "Solution for distributed stream processing", "url": "https://www.fluvio.io/" } } ``` </details> <details> <summary><b>Sparse vectors</b></summary> Reference: [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). 
```json { "collection_name": "{collection_name}", "id": 1, "vector": { "some-sparse": { "indices": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "values": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, "payload": { "name": "fluvio", "description": "Solution for distributed stream processing", "url": "https://www.fluvio.io/" } } ``` </details> <details> <summary><b>Multi-vector</b></summary> ```json { "collection_name": "{collection_name}", "id": 1, "vector": { "some-multi": [ [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ], [ 1.0, 0.9, 0.8, 0.5, 0.4, 0.8, 0.6, 0.4, 0.2, 0.1 ] ] }, "payload": { "name": "fluvio", "description": "Solution for distributed stream processing", "url": "https://www.fluvio.io/" } } ``` </details> <details> <summary><b>Combination of named dense and sparse vectors</b></summary> Reference: - [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). - [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { "collection_name": "{collection_name}", "id": "a10435b5-2a58-427a-a3a0-a5d845b147b7", "vector": { "some-other-dense": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], "some-sparse": { "indices": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "values": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, "payload": { "name": "fluvio", "description": "Solution for distributed stream processing", "url": "https://www.fluvio.io/" } } ``` </details> ### Further Reading - [Fluvio Quickstart](https://www.fluvio.io/docs/fluvio/quickstart) - [Fluvio Tutorials](https://www.fluvio.io/docs/fluvio/tutorials/) - [Connector Source](https://github.com/qdrant/qdrant-fluvio)
documentation/data-management/fluvio.md
---
title: Unstructured
aliases: [ ../frameworks/unstructured/ ]
---

# Unstructured

[Unstructured](https://unstructured.io/) is a library designed to help preprocess and structure unstructured text documents for downstream machine learning tasks.

Qdrant can be used as an ingestion destination in Unstructured.

## Setup

Install Unstructured with the `qdrant` extra.

```bash
pip install "unstructured[qdrant]"
```

## Usage

Depending on your use case, you can use the command-line interface or call Unstructured from within your application.

### CLI

```bash
EMBEDDING_PROVIDER=${EMBEDDING_PROVIDER:-"langchain-huggingface"}

unstructured-ingest \
  local \
  --input-path example-docs/book-war-and-peace-1225p.txt \
  --output-dir local-output-to-qdrant \
  --strategy fast \
  --chunk-elements \
  --embedding-provider "$EMBEDDING_PROVIDER" \
  --num-processes 2 \
  --verbose \
  qdrant \
  --collection-name "test" \
  --url "http://localhost:6333" \
  --batch-size 80
```

For a full list of the options the CLI accepts, run `unstructured-ingest <upstream connector> qdrant --help`

### Programmatic usage

```python
from unstructured.ingest.connector.local import SimpleLocalConfig
from unstructured.ingest.connector.qdrant import (
    QdrantWriteConfig,
    SimpleQdrantConfig,
)
from unstructured.ingest.interfaces import (
    ChunkingConfig,
    EmbeddingConfig,
    PartitionConfig,
    ProcessorConfig,
    ReadConfig,
)
from unstructured.ingest.runner import LocalRunner
from unstructured.ingest.runner.writers.base_writer import Writer
from unstructured.ingest.runner.writers.qdrant import QdrantWriter


def get_writer() -> Writer:
    return QdrantWriter(
        connector_config=SimpleQdrantConfig(
            url="http://localhost:6333",
            collection_name="test",
        ),
        write_config=QdrantWriteConfig(batch_size=80),
    )


if __name__ == "__main__":
    writer = get_writer()
    runner = LocalRunner(
        processor_config=ProcessorConfig(
            verbose=True,
            output_dir="local-output-to-qdrant",
            num_processes=2,
        ),
        connector_config=SimpleLocalConfig(
            input_path="example-docs/book-war-and-peace-1225p.txt",
        ),
        read_config=ReadConfig(),
        partition_config=PartitionConfig(),
        chunking_config=ChunkingConfig(chunk_elements=True),
        embedding_config=EmbeddingConfig(provider="langchain-huggingface"),
        writer=writer,
        writer_kwargs={},
    )
    runner.run()
```

## Next steps

- Unstructured API [reference](https://unstructured-io.github.io/unstructured/api.html).
- Qdrant ingestion destination [reference](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html).
- [Source Code](https://github.com/Unstructured-IO/unstructured/blob/main/unstructured/ingest/connector/qdrant.py)
documentation/data-management/unstructured.md
--- title: Data Management weight: 15 --- ## Data Management Integrations | Integration | Description | | ------------------------------- | -------------------------------------------------------------------------------------------------- | | [Airbyte](./airbyte/) | Data integration platform specialising in ELT pipelines. | | [Airflow](./airflow/) | Platform designed for developing, scheduling, and monitoring batch-oriented workflows. | | [Connect](./redpanda/) | Declarative data-agnostic streaming service for efficient, stateless processing. | | [Confluent](./confluent/) | Fully-managed data streaming platform with a cloud-native Apache Kafka engine. | | [DLT](./dlt/) | Python library to simplify data loading processes between several sources and destinations. | | [Fluvio](./fluvio/) | Rust-based platform for high speed, real-time data processing. | | [Fondant](./fondant/) | Framework for developing datasets, sharing reusable operations and data processing trees. | | [MindsDB](./mindsdb/) | Platform to deploy, serve, and fine-tune models with numerous data source integrations. | | [NiFi](./nifi/) | Data ingestion platform to manage data transfer between different sources and destination systems. | | [Spark](./spark/) | A unified analytics engine for large-scale data processing. | | [Unstructured](./unstructured/) | Python library with components for ingesting and pre-processing data from numerous sources. |
documentation/data-management/_index.md
---
title: Fondant
aliases: [ ../integrations/fondant/, ../frameworks/fondant/ ]
---

# Fondant

[Fondant](https://fondant.ai/en/stable/) is an open-source framework that aims to simplify and speed up large-scale data processing by making containerized components reusable across pipelines and execution environments. Benefit from built-in features such as autoscaling, data lineage, and pipeline caching, and deploy to (managed) platforms such as Vertex AI, Sagemaker, and Kubeflow Pipelines.

Fondant comes with a library of reusable components that you can leverage to compose your own pipeline, including a Qdrant component for writing embeddings to Qdrant.

## Usage

<aside role="status">
A Qdrant collection has to be <a href="/documentation/concepts/collections/">created in advance</a>
</aside>

**A data load pipeline for RAG using Qdrant**.

A simple ingestion pipeline could look like the following:

```python
import pyarrow as pa
from fondant.pipeline import Pipeline

indexing_pipeline = Pipeline(
    name="ingestion-pipeline",
    description="Pipeline to prepare and process data for building a RAG solution",
    base_path="./fondant-artifacts",
)

# A custom implementation of a read component.
text = indexing_pipeline.read(
    "path/to/data-source-component",
    arguments={
        # your custom arguments
    }
)

chunks = text.apply(
    "chunk_text",
    arguments={
        "chunk_size": 512,
        "chunk_overlap": 32,
    },
)

embeddings = chunks.apply(
    "embed_text",
    arguments={
        "model_provider": "huggingface",
        "model": "all-MiniLM-L6-v2",
    },
)

embeddings.write(
    "index_qdrant",
    arguments={
        "url": "http://localhost:6333",
        "collection_name": "some-collection-name",
    },
    cache=False,
)
```

Once you have a pipeline, you can easily run it using the built-in CLI. Fondant allows you to run the pipeline in production across different clouds.

The first component is a custom read module that needs to be implemented and cannot be used off the shelf. A detailed tutorial on how to rebuild this pipeline [is provided on GitHub](https://github.com/ml6team/fondant-usecase-RAG/tree/main).

## Next steps

More information about creating your own pipelines and components can be found in the [Fondant documentation](https://fondant.ai/en/stable/).
documentation/data-management/fondant.md
---
title: Working with ColBERT
weight: 6
---

# How to Generate ColBERT Multivectors with FastEmbed

With FastEmbed, you can use ColBERT to generate multivector embeddings. ColBERT is a powerful retrieval model that combines the strength of BERT embeddings with efficient late interaction techniques. FastEmbed provides an optimized pipeline for using these embeddings in your search tasks.

Please note that ColBERT requires more resources than no-interaction models. We recommend using ColBERT as a re-ranker rather than a first-stage retriever: let a simpler model retrieve the first 100-500 candidates, then rank those results with ColBERT.

## Setup

This command imports all late interaction models for text embedding.

```python
from fastembed import LateInteractionTextEmbedding
```

You can list which models are supported in your version of FastEmbed.

```python
LateInteractionTextEmbedding.list_supported_models()
```

This command displays the available models. The output shows details about the ColBERT model, including its dimensions, description, size, sources, and model file.

```python
[{'model': 'colbert-ir/colbertv2.0',
  'dim': 128,
  'description': 'Late interaction model',
  'size_in_GB': 0.44,
  'sources': {'hf': 'colbert-ir/colbertv2.0'},
  'model_file': 'model.onnx'}]
```

Now, load the model.

```python
embedding_model = LateInteractionTextEmbedding("colbert-ir/colbertv2.0")
```

The model files will be fetched and downloaded, with progress shown.

## Embed data

First, you need to define both documents and queries.

```python
documents = [
    "ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT.",
    "On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process",
]
queries = [
    "Are there any other late interaction text embedding models except ColBERT?",
    "What is the difference between late interaction and early interaction text embedding models?",
]
```

**Note:** ColBERT computes document and query embeddings differently. Make sure to use the corresponding methods.

Now, create embeddings from both documents and queries.

```python
# embed and query_embed return generators,
# which we need to evaluate by writing them to a list
document_embeddings = list(embedding_model.embed(documents))
query_embeddings = list(embedding_model.query_embed(queries))
```

Display the shapes of document and query embeddings.

```python
document_embeddings[0].shape, query_embeddings[0].shape
```

You should get something like this:

```python
((26, 128), (32, 128))
```

Don't worry about the query embeddings having the larger shape in this case. The ColBERT authors recommend padding queries with [MASK] tokens to 32 tokens. They also recommend truncating queries to 32 tokens; however, FastEmbed does not truncate, so you can pass longer queries if needed.

## Compute similarity

This function calculates the relevance scores using the MaxSim operator, sorts the documents based on these scores, and returns the indices of the top-k documents.

```python
import numpy as np

def compute_relevance_scores(query_embedding: np.array, document_embeddings: np.array, k: int):
    """
    Compute relevance scores for top-k documents given a query.
:param query_embedding: Numpy array representing the query embedding, shape: [num_query_terms, embedding_dim] :param document_embeddings: Numpy array representing embeddings for documents, shape: [num_documents, max_doc_length, embedding_dim] :param k: Number of top documents to return :return: Indices of the top-k documents based on their relevance scores """ # Compute batch dot-product of query_embedding and document_embeddings # Resulting shape: [num_documents, num_query_terms, max_doc_length] scores = np.matmul(query_embedding, document_embeddings.transpose(0, 2, 1)) # Apply max-pooling across document terms (axis=2) to find the max similarity per query term # Shape after max-pool: [num_documents, num_query_terms] max_scores_per_query_term = np.max(scores, axis=2) # Sum the scores across query terms to get the total score for each document # Shape after sum: [num_documents] total_scores = np.sum(max_scores_per_query_term, axis=1) # Sort the documents based on their total scores and get the indices of the top-k documents sorted_indices = np.argsort(total_scores)[::-1][:k] return sorted_indices ``` Calculate sorted indices. ```python sorted_indices = compute_relevance_scores( np.array(query_embeddings[0]), np.array(document_embeddings), k=3 ) print("Sorted document indices:", sorted_indices) ``` The output shows the sorted document indices based on the relevance to the query. ```python Sorted document indices: [0 1] ``` ## Show results ```python print(f"Query: {queries[0]}") for index in sorted_indices: print(f"Document: {documents[index]}") ``` The query and corresponding sorted documents are displayed, showing the relevance of each document to the query. ```bash Query: Are there any other late interaction text embedding models except ColBERT? Document: ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT. Document: On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process ```
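
## Store and query in Qdrant

The MaxSim computation above is what Qdrant's multivector support runs natively, so you can also store these embeddings and let the server do the scoring. A minimal sketch, assuming qdrant-client 1.10+ and the `documents`, `document_embeddings`, and `query_embeddings` defined earlier; the collection name is an illustrative assumption.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # or point to your Qdrant instance

client.create_collection(
    collection_name="colbert_demo",  # assumed name
    vectors_config=models.VectorParams(
        size=128,
        distance=models.Distance.COSINE,
        multivector_config=models.MultiVectorConfig(
            comparator=models.MultiVectorComparator.MAX_SIM
        ),
    ),
)

client.upsert(
    collection_name="colbert_demo",
    points=[
        models.PointStruct(id=idx, vector=embedding.tolist(), payload={"text": doc})
        for idx, (embedding, doc) in enumerate(zip(document_embeddings, documents))
    ],
)

results = client.query_points(
    collection_name="colbert_demo",
    query=query_embeddings[0].tolist(),  # a multivector query scored with MaxSim
)
for point in results.points:
    print(point.score, point.payload["text"])
```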
documentation/fastembed/fastembed-colbert.md
---
title: Working with SPLADE
weight: 5
---

# How to Generate Sparse Vectors with SPLADE

SPLADE is a novel method for learning sparse text representation vectors, outperforming BM25 in tasks like information retrieval and document classification. Its main advantage is generating efficient and interpretable sparse vectors, making it effective for large-scale text data.

## Setup

First, install FastEmbed.

```bash
pip install -q fastembed
```

Next, import the required modules for sparse embeddings and Python's typing module.

```python
from fastembed import SparseTextEmbedding, SparseEmbedding
from typing import List
```

You can always check the list of all supported sparse embedding models.

```python
SparseTextEmbedding.list_supported_models()
```

This will return a list of models, each with its details such as model name, vocabulary size, description, and sources.

```python
[{'model': 'prithivida/Splade_PP_en_v1',
  'vocab_size': 30522,
  'description': 'Independent Implementation of SPLADE++ Model for English',
  'size_in_GB': 0.532,
  'sources': {'hf': 'Qdrant/SPLADE_PP_en_v1'}}]
```

Now, load the model.

```python
model_name = "prithivida/Splade_PP_en_v1"
# This triggers the model download
model = SparseTextEmbedding(model_name=model_name)
```

## Embed data

You need to define a list of documents to be embedded.

```python
documents: List[str] = [
    "Chandrayaan-3 is India's third lunar mission",
    "It aimed to land a rover on the Moon's surface - joining the US, China and Russia",
    "The mission is a follow-up to Chandrayaan-2, which had partial success",
    "Chandrayaan-3 will be launched by the Indian Space Research Organisation (ISRO)",
    "The estimated cost of the mission is around $35 million",
    "It will carry instruments to study the lunar surface and atmosphere",
    "Chandrayaan-3 landed on the Moon's surface on 23rd August 2023",
    "It consists of a lander named Vikram and a rover named Pragyan similar to Chandrayaan-2. Its propulsion module would act like an orbiter.",
    "The propulsion module carries the lander and rover configuration until the spacecraft is in a 100-kilometre (62 mi) lunar orbit",
    "The mission used GSLV Mk III rocket for its launch",
    "Chandrayaan-3 was launched from the Satish Dhawan Space Centre in Sriharikota",
    "Chandrayaan-3 was launched earlier in the year 2023",
]
```

Then, generate sparse embeddings for each document. Here, `batch_size` is optional and helps to process documents in batches.

```python
sparse_embeddings_list: List[SparseEmbedding] = list(
    model.embed(documents, batch_size=6)
)
```

## Retrieve embeddings

`sparse_embeddings_list` contains sparse embeddings for the documents provided earlier. Each element in this list is a `SparseEmbedding` object that contains the sparse vector representation of a document.

```python
index = 0
sparse_embeddings_list[index]
```

This output is a `SparseEmbedding` object for the first document in our list. It contains two arrays: `values` and `indices`.

- The `values` array represents the weights of the features (tokens) in the document.
- The `indices` array represents the indices of these features in the model's vocabulary.

Each pair of corresponding `values` and `indices` represents a token and its weight in the document.
```python SparseEmbedding(values=array([0.05297208, 0.01963477, 0.36459631, 1.38508618, 0.71776593, 0.12667948, 0.46230844, 0.446771 , 0.26897505, 1.01519883, 1.5655334 , 0.29412213, 1.53102326, 0.59785569, 1.1001817 , 0.02079751, 0.09955651, 0.44249091, 0.09747757, 1.53519952, 1.36765671, 0.15740395, 0.49882549, 0.38629025, 0.76612782, 1.25805044, 0.39058095, 0.27236196, 0.45152301, 0.48262018, 0.26085234, 1.35912788, 0.70710695, 1.71639752]), indices=array([ 1010, 1011, 1016, 1017, 2001, 2018, 2034, 2093, 2117, 2319, 2353, 2509, 2634, 2686, 2796, 2817, 2922, 2959, 3003, 3148, 3260, 3390, 3462, 3523, 3822, 4231, 4316, 4774, 5590, 5871, 6416, 11926, 12076, 16469])) ``` ## Examine weights Now, print the first 5 features and their weights for better understanding. ```python for i in range(5): print(f"Token at index {sparse_embeddings_list[0].indices[i]} has weight {sparse_embeddings_list[0].values[i]}") ``` The output will display the token indices and their corresponding weights for the first document. ```python Token at index 1010 has weight 0.05297207832336426 Token at index 1011 has weight 0.01963476650416851 Token at index 1016 has weight 0.36459630727767944 Token at index 1017 has weight 1.385086178779602 Token at index 2001 has weight 0.7177659273147583 ``` ## Analyze results Let's use the tokenizer vocab to make sense of these indices. ```python import json from tokenizers import Tokenizer tokenizer = Tokenizer.from_pretrained(SparseTextEmbedding.list_supported_models()[0]["sources"]["hf"]) ``` The `get_tokens_and_weights` function takes a `SparseEmbedding` object and a `tokenizer` as input. It will construct a dictionary where the keys are the decoded tokens, and the values are their corresponding weights. ```python def get_tokens_and_weights(sparse_embedding, tokenizer): token_weight_dict = {} for i in range(len(sparse_embedding.indices)): token = tokenizer.decode([sparse_embedding.indices[i]]) weight = sparse_embedding.values[i] token_weight_dict[token] = weight # Sort the dictionary by weights token_weight_dict = dict(sorted(token_weight_dict.items(), key=lambda item: item[1], reverse=True)) return token_weight_dict # Test the function with the first SparseEmbedding print(json.dumps(get_tokens_and_weights(sparse_embeddings_list[index], tokenizer), indent=4)) ``` ## Dictionary output The dictionary is then sorted by weights in descending order. ```python { "chandra": 1.7163975238800049, "third": 1.5655333995819092, "##ya": 1.535199522972107, "india": 1.5310232639312744, "3": 1.385086178779602, "mission": 1.3676567077636719, "lunar": 1.3591278791427612, "moon": 1.2580504417419434, "indian": 1.1001816987991333, "##an": 1.015198826789856, "3rd": 0.7661278247833252, "was": 0.7177659273147583, "spacecraft": 0.7071069478988647, "space": 0.5978556871414185, "flight": 0.4988254904747009, "satellite": 0.4826201796531677, "first": 0.46230843663215637, "expedition": 0.4515230059623718, "three": 0.4467709958553314, "fourth": 0.44249090552330017, "vehicle": 0.390580952167511, "iii": 0.3862902522087097, "2": 0.36459630727767944, "##3": 0.2941221296787262, "planet": 0.27236196398735046, "second": 0.26897504925727844, "missions": 0.2608523368835449, "launched": 0.15740394592285156, "had": 0.12667948007583618, "largest": 0.09955651313066483, "leader": 0.09747757017612457, ",": 0.05297207832336426, "study": 0.02079751156270504, "-": 0.01963476650416851 } ``` ## Observations - The relative order of importance is quite useful. The most important tokens in the sentence have the highest weights. 
- **Term Expansion:** The model can expand the terms in the document. This means it can generate weights for tokens that are not present in the document but are related to its content. This is a powerful feature that allows the model to capture the context of the document. Here, the model has added the token '3' (from 'third') and 'moon' (from 'lunar') to the sparse vector.

## Design choices

- The weights are not normalized. This means that the sum of the weights is not 1 or 100. This is a common practice in sparse embeddings, as it allows the model to capture the importance of each token in the document.
- Tokens are included in the sparse vector only if they are present in the model's vocabulary. This means that the model will not generate a weight for tokens that it has not seen during training.
- Tokens do not map to words directly, which allows you to gracefully handle typos and out-of-vocabulary tokens.
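
## Store and query in Qdrant

Once you have these sparse embeddings, they can go straight into Qdrant's sparse vector support. A minimal sketch reusing `model`, `documents`, and `sparse_embeddings_list` from above; the collection and vector names are illustrative assumptions, and `query_points` assumes qdrant-client 1.10+.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # or point to your Qdrant instance

client.create_collection(
    collection_name="splade_demo",  # assumed name
    vectors_config={},  # no dense vectors in this collection
    sparse_vectors_config={"text": models.SparseVectorParams()},
)

client.upsert(
    collection_name="splade_demo",
    points=[
        models.PointStruct(
            id=idx,
            vector={
                "text": models.SparseVector(
                    indices=embedding.indices.tolist(),
                    values=embedding.values.tolist(),
                )
            },
            payload={"document": doc},
        )
        for idx, (embedding, doc) in enumerate(zip(sparse_embeddings_list, documents))
    ],
)

# Embed the query with the same model; sparse matches are scored by dot product.
query_embedding = list(model.embed("When was Chandrayaan-3 launched?"))[0]
results = client.query_points(
    collection_name="splade_demo",
    query=models.SparseVector(
        indices=query_embedding.indices.tolist(),
        values=query_embedding.values.tolist(),
    ),
    using="text",
)
```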
documentation/fastembed/fastembed-splade.md
--- title: "FastEmbed & Qdrant" weight: 3 --- # Using FastEmbed with Qdrant for Vector Search ## Install Qdrant Client ```python pip install qdrant-client ``` ## Install FastEmbed Installing FastEmbed will let you quickly turn data to vectors, so that Qdrant can search over them. ```python pip install fastembed ``` ## Initialize the client Qdrant Client has a simple in-memory mode that lets you try semantic search locally. ```python from qdrant_client import QdrantClient client = QdrantClient(":memory:") # Qdrant is running from RAM. ``` ## Add data Now you can add two sample documents, their associated metadata, and a point `id` for each. ```python docs = ["Qdrant has a LangChain integration for chatbots.", "Qdrant has a LlamaIndex integration for agents."] metadata = [ {"source": "langchain-docs"}, {"source": "llamaindex-docs"}, ] ids = [42, 2] ``` ## Load data to a collection Create a test collection and upsert your two documents to it. ```python client.add( collection_name="test_collection", documents=docs, metadata=metadata, ids=ids ) ``` ## Run vector search Here, you will ask a dummy question that will allow you to retrieve a semantically relevant result. ```python search_result = client.query( collection_name="test_collection", query_text="Which integration is best for agents?" ) print(search_result) ``` The semantic search engine will retrieve the most similar result in order of relevance. In this case, the second statement about LlamaIndex is more relevant. ```bash [QueryResponse(id=2, embedding=None, sparse_embedding=None, metadata={'document': 'Qdrant has a LlamaIndex integration for agents', 'source': 'llamaindex-docs'}, document='Qdrant has a LlamaIndex integration for agents.', score=0.8749180370667156), QueryResponse(id=42, embedding=None, sparse_embedding=None, metadata={'document': 'Qdrant has a LangChain integration for chatbots.', 'source': 'langchain-docs'}, document='Qdrant has a LangChain integration for chatbots.', score=0.8351846822959111)] ```
documentation/fastembed/fastembed-semantic-search.md
---
title: "Quickstart"
weight: 2
---

# How to Generate Text Embeddings with FastEmbed

## Install FastEmbed

```bash
pip install fastembed
```

Just for demo purposes, you will use lists and NumPy to work with sample data.

```python
from typing import List
import numpy as np
```

## Load default model

In this example, you will use the default text embedding model, `BAAI/bge-small-en-v1.5`.

```python
from fastembed import TextEmbedding
```

## Add sample data

Now, add two sample documents. Your documents must be in a list, and each document must be a string.

```python
documents: List[str] = [
    "FastEmbed is lighter than Transformers & Sentence-Transformers.",
    "FastEmbed is supported by and maintained by Qdrant.",
]
```

Download and initialize the model. Print a message to verify the process.

```python
embedding_model = TextEmbedding()
print("The model BAAI/bge-small-en-v1.5 is ready to use.")
```

## Embed data

Generate embeddings for both documents, then print each document alongside the type and shape of its vector.

```python
embeddings_generator = embedding_model.embed(documents)
embeddings_list = list(embeddings_generator)
for document, vector in zip(documents, embeddings_list):
    print(f"Document: {document}")
    print(f"Vector of type: {type(vector)} with shape: {vector.shape}")
```

The default model creates vectors with 384 dimensions.

```bash
Document: FastEmbed is lighter than Transformers & Sentence-Transformers.
Vector of type: <class 'numpy.ndarray'> with shape: (384,)
Document: FastEmbed is supported by and maintained by Qdrant.
Vector of type: <class 'numpy.ndarray'> with shape: (384,)
```

## Visualize embeddings

```python
print("Embeddings:\n", embeddings_list)
```

The embeddings don't look too interesting, but here is a visual.

```bash
Embeddings:
 [[-0.11154681  0.00976555  0.00524559  0.01951888 -0.01934952  0.02943449
  -0.10519084 -0.00890122  0.01831438  0.01486796 -0.05642502  0.02561352
  -0.00120165  0.00637456  0.02633459  0.0089221   0.05313658  0.03955453
  -0.04400245 -0.02929407  0.04691846 -0.02515868  0.00778646 -0.05410657
...
  -0.00243012 -0.01820582  0.02938612  0.02108984 -0.02178085  0.02971899
  -0.00790564  0.03561783  0.0652488  -0.04371546 -0.05550042  0.02651665
  -0.01116153 -0.01682246 -0.05976734 -0.03143916  0.06522726  0.01801389
  -0.02611006  0.01627177 -0.0368538   0.03968835  0.027597    0.03305927]]
```
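This is also where the NumPy import pays off: you can compare the two vectors directly. A minimal sketch, reusing `embeddings_list` from above; the explicit division by the norms keeps the result correct even if the vectors were not already unit-length.

```python
# Cosine similarity between the two document embeddings.
a, b = embeddings_list[0], embeddings_list[1]
cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"Cosine similarity: {cosine_similarity:.4f}")
```

A value close to 1 means the documents are semantically similar; unrelated texts typically score noticeably lower.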
documentation/fastembed/fastembed-quickstart.md
---
title: "FastEmbed"
weight: 6
---

# What is FastEmbed?

FastEmbed is a lightweight Python library built for embedding generation. It supports popular embedding models and offers a user-friendly experience for embedding data into vector space.

By using FastEmbed, you can ensure that your embedding generation process is not only fast and efficient but also highly accurate, meeting the needs of various machine learning and natural language processing applications.

FastEmbed easily integrates with Qdrant for a variety of multimodal search purposes.

## How to get started with FastEmbed

|Beginner|Advanced|
|:-:|:-:|
|[Generate Text Embeddings with FastEmbed](fastembed-quickstart/)|[Combine FastEmbed with Qdrant for Vector Search](fastembed-semantic-search/)|

## Why is FastEmbed useful?

- Light: Unlike other inference frameworks, such as PyTorch, FastEmbed has very few external dependencies. Because it uses the ONNX runtime, it is perfect for serverless environments like AWS Lambda.
- Fast: By using ONNX, FastEmbed ensures high-performance inference across various hardware platforms.
- Accurate: FastEmbed aims for better accuracy and recall than models like OpenAI's `Ada-002`. It uses models that demonstrate strong results on the MTEB leaderboard.
- Support: FastEmbed supports a wide range of models, including multilingual ones, to meet diverse use case needs (see the snippet below for how to list them).
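To see exactly which models your installed version supports, you can list them programmatically. A minimal sketch; the `model` and `dim` dictionary keys reflect the metadata FastEmbed returns at the time of writing.

```python
from fastembed import TextEmbedding

# Print the name and output dimensionality of every supported text embedding model.
for model_info in TextEmbedding.list_supported_models():
    print(f'{model_info["model"]}: {model_info["dim"]} dimensions')
```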
documentation/fastembed/_index.md
---
title: OpenLIT
weight: 3100
aliases: [ ../frameworks/openlit/ ]
---

# OpenLIT

[OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native observability tool for LLM applications. It includes OpenTelemetry auto-instrumentation that monitors Qdrant and provides insights to improve database operations and application performance.

This page assumes you're using `qdrant-client` version 1.7.3 or above.

## Usage

### Step 1: Install OpenLIT

Open your command line or terminal and run:

```bash
pip install openlit
```

### Step 2: Initialize OpenLIT in your Application

Integrating OpenLIT into LLM applications is straightforward with just **two lines of code**:

```python
import openlit

openlit.init()
```

OpenLIT directs the trace to your console by default. To forward telemetry data to an HTTP OTLP endpoint, configure the `otlp_endpoint` parameter or the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable (see the sketch at the end of this page).

For OpenTelemetry backends requiring authentication, use the `otlp_headers` parameter or the `OTEL_EXPORTER_OTLP_HEADERS` environment variable with the required values.

## Further Reading

With the LLM Observability data now being collected by OpenLIT, the next step is to visualize and analyze this data to get insights into Qdrant's performance and behavior, and to identify areas for improvement.

To begin exploring your LLM Application's performance data within the OpenLIT UI, please see the [Quickstart Guide](https://docs.openlit.io/latest/quickstart).

If you want to integrate and send the generated metrics and traces to your existing observability tools like Prometheus + Jaeger, Grafana, and more, refer to the [Official Documentation for OpenLIT Connections](https://docs.openlit.io/latest/connections/intro) for detailed instructions.
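As a concrete illustration of the `otlp_endpoint` parameter mentioned above, here is a minimal sketch; the endpoint URL is a placeholder, so substitute your own collector's address.

```python
import openlit

# Forward telemetry to an OTLP HTTP endpoint instead of printing to the console.
# "http://127.0.0.1:4318" is a placeholder for your collector's address.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
```

Equivalently, you can keep the plain `openlit.init()` call and set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable instead.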
documentation/observability/openlit.md
---
title: Datadog
---

![Datadog Cover](/documentation/observability/datadog/datadog-cover.jpg)

[Datadog](https://www.datadoghq.com/) is a cloud-based monitoring and analytics platform that offers real-time monitoring of servers, databases, and numerous other tools and services. It provides visibility into the performance of applications and enables businesses to detect issues before they affect users.

You can install the [Qdrant integration](https://docs.datadoghq.com/integrations/qdrant/) to get real-time metrics to monitor your Qdrant deployment within Datadog, including:

- The performance of the REST and gRPC interfaces, with metrics such as total requests, total failures, and time to serve, to identify potential bottlenecks and mitigate them.

- Information about cluster readiness and deployment status (total peers, pending operations, etc.) to gain insight into your Qdrant deployment.

## Usage

- With the [Datadog Agent installed](https://docs.datadoghq.com/agent/basic_agent_usage), run the following command to add the Qdrant integration:

```shell
datadog-agent integration install -t qdrant==1.0.0
```

- Edit the `qdrant.d/conf.yaml` file in the `conf.d/` folder at the root of your [Agent's configuration directory](https://docs.datadoghq.com/agent/guide/agent-configuration-files/#agent-configuration-directory) to start collecting your [Qdrant metrics](/documentation/guides/monitoring/). Most importantly, set the `openmetrics_endpoint` value to the `/metrics` endpoint of your Qdrant instance.

```yaml
instances:
  ## @param openmetrics_endpoint - string - optional
  ## The URL exposing metrics in the OpenMetrics format.
  - openmetrics_endpoint: http://localhost:6333/metrics
```

If the Qdrant instance requires authentication, you can specify the token by configuring [`extra_headers`](https://github.com/DataDog/integrations-core/blob/26f9ae7660f042c43f5d771f0c937ff805cf442c/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example#L553C1-L558C35).

```yaml
# @param extra_headers - mapping - optional
# Additional headers to send with every request.
extra_headers:
   api-key: <QDRANT_API_KEY>
```

- Restart the Datadog Agent.

- You can now head over to the Datadog dashboard to view the [metrics](https://docs.datadoghq.com/integrations/qdrant/#data-collected) emitted by the Qdrant check.

## Further Reading

- [Getting started with Datadog](https://docs.datadoghq.com/getting_started/)
- [Qdrant integration source](https://github.com/DataDog/integrations-extras/tree/master/qdrant)
documentation/observability/datadog.md
--- title: OpenLLMetry weight: 2300 aliases: [ ../frameworks/openllmetry/ ] --- # OpenLLMetry OpenLLMetry from [Traceloop](https://www.traceloop.com/) is a set of extensions built on top of [OpenTelemetry](https://opentelemetry.io/) that gives you complete observability over your LLM application. OpenLLMetry supports instrumenting the `qdrant_client` Python library and exporting the traces to various observability platforms, as described in their [Integrations catalog](https://www.traceloop.com/docs/openllmetry/integrations/introduction#the-integrations-catalog). This page assumes you're using `qdrant-client` version 1.7.3 or above. ## Usage To set up OpenLLMetry, follow these steps: 1. Install the SDK: ```console pip install traceloop-sdk ``` 1. Instantiate the SDK: ```python from traceloop.sdk import Traceloop Traceloop.init() ``` You're now tracing your `qdrant_client` usage with OpenLLMetry! ## Without the SDK Since Traceloop provides standard OpenTelemetry instrumentations, you can use them as standalone packages. To do so, follow these steps: 1. Install the package: ```console pip install opentelemetry-instrumentation-qdrant ``` 1. Instantiate the `QdrantInstrumentor`. ```python from opentelemetry.instrumentation.qdrant import QdrantInstrumentor QdrantInstrumentor().instrument() ``` ## Further Reading - 📚 OpenLLMetry [API reference](https://www.traceloop.com/docs/api-reference/introduction) - 📄 [Source Code](https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-qdrant)
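As a quick sanity check that instrumentation is active, here is a minimal sketch combining either setup above with `qdrant-client`; the in-memory client and the call below are illustrative.

```python
from traceloop.sdk import Traceloop
from qdrant_client import QdrantClient

Traceloop.init()

# Any qdrant-client call made after init() is traced automatically.
client = QdrantClient(":memory:")
client.get_collections()  # appears as a span in your configured exporter
```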
documentation/observability/openllmetry.md
--- title: Observability weight: 15 --- ## Observability Integrations | Tool | Description | | ----------------------------- | -------------------------------------------------------------------------------------- | | [OpenLIT](./openlit/) | Platform for OpenTelemetry-native Observability & Evals for LLMs and Vector Databases. | | [OpenLLMetry](./openllmetry/) | Set of OpenTelemetry extensions to add Observability for your LLM application. | | [Datadog](./datadog/) | Cloud-based monitoring and analytics platform. |
documentation/observability/_index.md