b55bd8ef143b-11
|
"metadata": {"genre": "comedy", "year": 2019}
},
{
"id": "D",
"values": [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4],
"metadata": {"genre": "drama"}
},
{
"id": "E",
"values": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
"metadata": {"genre": "drama"}
}
]
}'
|
https://docs.pinecone.io/docs/insert-data
|
b55bd8ef143b-12
|
Upserting vectors with sparse values
Sparse vector values can be upserted alongside dense vector values.
Python / curl
index = pinecone.Index('example-index')
|
https://docs.pinecone.io/docs/insert-data
|
b55bd8ef143b-13
|
upsert_response = index.upsert(
vectors=[
{'id': 'vec1',
'values': [0.1, 0.2, 0.3, 0.4],
'metadata': {'genre': 'drama'},
'sparse_values': {
'indices': [10, 45, 16],
'values': [0.5, 0.5, 0.2]
}},
{'id': 'vec2',
'values': [0.2, 0.3, 0.4, 0.5],
'metadata': {'genre': 'action'},
'sparse_values': {
'indices': [15, 40, 11],
'values': [0.4, 0.5, 0.2]
}}
],
namespace='example-namespace'
)
curl --request POST \
--url https://index_name-project_id.svc.environment.pinecone.io/vectors/upsert \
--header 'accept: application/json' \
--header 'content-type: application/json' \
--data '
{
"vectors": [
{
"values": [
0.1,
0.2,
|
https://docs.pinecone.io/docs/insert-data
|
b55bd8ef143b-14
|
0.3,
0.4
],
"sparseValues": {
"indices": [
10,
45,
16
],
"values": [
0.4,
0.5,
0.2
]
},
"id": "vec1"
},
{
"values": [
0.2,
0.3,
0.4,
0.5
],
"sparseValues": {
"indices": [
15,
40,
11
],
"values": [
0.4,
0.5,
0.2
]
},
"id": "vec2"
}
]
}
'
|
https://docs.pinecone.io/docs/insert-data
|
b55bd8ef143b-15
|
Limitations
The following limitations apply to upserting sparse vectors:
You cannot upsert sparse vector values without dense vector values.
Only s1 and p1 pod types using the dotproduct metric support querying sparse vectors. There is no error at upsert time; however, if you attempt to query any other pod type using sparse vectors, Pinecone returns an error.
Sparse vectors can contain at most 1,000 non-zero values.
Indexes created before February 22, 2023 do not support sparse values.
Troubleshooting index fullness errors
When upserting data, you may receive the following error:
console
Index is full, cannot accept data.
This error means the index has reached capacity and new upserts may fail. The index can still serve queries, but you need to scale your environment to accommodate more vectors.
To resolve this issue, you can scale your index.
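One way to catch this condition early is to watch the index_fullness value returned by describe_index_stats (the same field shown in the quickstart output later in this document). A minimal sketch with the Python client; the 0.9 threshold is an arbitrary illustration, not an official recommendation:

Python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")

# index_fullness ranges from 0.0 (empty) to 1.0 (full).
stats = index.describe_index_stats()
if stats.index_fullness > 0.9:
    print("Index is nearly full; scale the index before upserting more vectors.")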
|
https://docs.pinecone.io/docs/insert-data
|
e26508dec703-0
|
Overview
This document describes how to make backup copies of your indexes using collections.
To learn how to create an index from a collection, see Manage indexes.
⚠️ Warning: This document uses collections. This is a public preview feature. Test thoroughly before using this feature with production workloads.
Create a backup using a collection
To create a backup of your index, use the create_collection operation. A collection is a static copy of your index that only consumes storage.
Example
The following example creates a collection named example-collection from an index named example-index.
Python / JavaScript / curl
pinecone.create_collection("example-collection", "example-index")
await pinecone.createCollection({
name: "example-collection",
source: "example-index",
});
curl -i -X POST https://controller.us-west1-gcp.pinecone.io/collections \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "example-collection",
"source": "example-index"
}'
|
https://docs.pinecone.io/docs/back-up-indexes
|
e26508dec703-1
|
Check the status of a collection
To retrieve the status of the process creating a collection and the size of the collection, use the describe_collection operation. Specify the name of the collection to check. You can only call describe_collection on a collection in the current project.
The describe_collection operation returns an object containing key-value pairs representing the name of the collection, the size in bytes, and the creation status of the collection.
Example
The following example gets the creation status and size of a collection named example-collection.
Python / JavaScript / curl
pinecone.describe_collection("example-collection")
const collectionDescription = await pinecone.describeCollection(
"example-collection"
);
console.log(collectionDescription.data);
curl -i -X GET https://controller.us-west1-gcp.pinecone.io/collections/example-collection \
-H 'Api-Key: YOUR_API_KEY'
Results:
Shell
CollectionDescription(name='test-collection', size=3818809, status='Ready')
List your collections
To get a list of the collections in the current project, use the list_collections operation.
Example
The following example gets a list of all collections in the current project.
Python / JavaScript / curl
pinecone.list_collections()
const collections = await pinecone.listCollections();
console.log(collections.data);
curl -i -X GET https://controller.us-west1-gcp.pinecone.io/collections \
-H 'Api-Key: YOUR_API_KEY'
Results:
Shell
example-collection
|
https://docs.pinecone.io/docs/back-up-indexes
|
e26508dec703-2
|
Delete a collection
To delete a collection, use the delete_collection operation. Specify the name of the collection to delete.
Deleting the collection takes several minutes. During this time, the describe_collection operation returns the status "deleting".
Example
The following example deletes the collection example-collection.
Python / JavaScript / curl
pinecone.delete_collection("example-collection")
await pinecone.deleteCollection("example-collection");
curl -i -X DELETE https://controller.us-west1-gcp.pinecone.io/collections/example-collection \
-H 'Api-Key: YOUR_API_KEY'
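Because deletion takes several minutes, one option is to poll until the collection disappears from the project. A rough sketch with the Python client (the names and polling interval are placeholders):

Python
import time
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
pinecone.delete_collection("example-collection")

# While deletion is in progress, describe_collection reports status "deleting";
# here we simply wait until the collection no longer appears in the project.
while "example-collection" in pinecone.list_collections():
    time.sleep(10)
print("Collection deleted.")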
|
https://docs.pinecone.io/docs/back-up-indexes
|
3d66538aad8c-0
|
In this topic, we explain how you can scale your indexes horizontally and vertically.
Projects in the gcp-starter environment do not support the features referred to here, including pods, replicas, and collections.
Vertical vs. horizontal scaling
If you need to scale your environment to accommodate more vectors, you can modify your existing index to scale it vertically or create a new index and scale horizontally. This article will describe both methods and how to scale your index effectively.
Vertical scaling
Scaling vertically is fast and involves no downtime. This is a good choice when you can't pause upserts and must continue serving traffic. It also allows you to double your capacity instantly. However, there are some factors to consider.
By changing the pod size, you can scale to x2, x4, or x8 pod sizes, doubling your capacity at each step. Each step up effectively doubles the number of pods' worth of usage. If you need to scale by smaller increments, consider horizontal scaling.
The number of base pods you specify when you initially create the index is static and cannot be changed. For example, if you start with 10 pods of p1.x1 and vertically scale to p1.x2, this equates to 20 pods' worth of usage. Nor can you change pod types with vertical scaling; if you want to change your pod type while scaling, horizontal scaling is the better option.
|
https://docs.pinecone.io/docs/scaling-indexes
|
3d66538aad8c-1
|
You can only scale index sizes up and cannot scale them back down.
See our learning center for more information on vertical scaling.
Horizontal scaling
There are two approaches to horizontal scaling in Pinecone: adding pods and adding replicas. Adding pods increases all resources but requires a pause in upserts; adding replicas only increases throughput and requires no pause in upserts.
Adding pods
Adding pods to an index increases all resources, including available capacity. Adding pods to an existing index is possible using our collections feature. A collection is an immutable snapshot of your index in time: a collection stores the data but not the original index definition.
When you create an index from a collection, you define the new index configuration. This allows you to scale the base pod count horizontally without scaling vertically. The main advantage of this approach is that you can scale incrementally instead of doubling capacity as with vertical scaling. Also, you can redefine pod types if you are experimenting or if you need to use a different pod type, such as performance-optimized pods or storage-optimized pods. Another advantage of this method is that you can change your metadata configuration to redefine metadata fields as indexed or stored-only. This is important when tuning your index for the best throughput.
Here are the general steps to make a copy of your index and create a new index while changing the pod type, pod count, metadata configuration, replicas, and all other parameters available when creating a new index (a Python sketch follows the list):
|
https://docs.pinecone.io/docs/scaling-indexes
|
3d66538aad8c-2
|
Pause upserts.
Create a collection from the current index.
Create an index from the collection with new parameters.
Continue upserts to the newly created index. Note: the URL has likely changed.
Delete the old index if desired.
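The following minimal Python sketch walks through these steps. The index and collection names are placeholders, and pausing and resuming upserts happens in your own application code:

Python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# 1. Pause upserts in your application, then snapshot the current index.
pinecone.create_collection("example-collection", "example-index")
# (Wait until describe_collection reports status "Ready" before the next step.)

# 2. Create a new index from the collection with the new configuration
#    (pod type, pod count, metadata_config, replicas, and so on).
pinecone.create_index(
    "example-index-v2",
    dimension=128,
    metric="cosine",
    pods=6,
    pod_type="s1.x1",
    source_collection="example-collection",
)

# 3. Point your application at the new index (note that its URL has changed)
#    and resume upserts.
index = pinecone.Index("example-index-v2")

# 4. Optionally delete the old index once the new one is verified.
# pinecone.delete_index("example-index")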
Adding replicas
Each replica duplicates the resources and data in an index. This means that adding additional replicas increases the throughput of the index but not its capacity. However, adding replicas does not require downtime.
Throughput in terms of queries per second (QPS) scales linearly with the number of replicas per index.
To add replicas, use the configure_index operation to increase the number of replicas for your index.
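For example, a minimal sketch with the Python client (the index name and replica count are placeholders):

Python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
# Raises throughput (QPS); capacity per pod is unchanged.
pinecone.configure_index("example-index", replicas=3)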
Next steps
See our learning center for more information on vertical scaling.
Learn more about collections.
|
https://docs.pinecone.io/docs/scaling-indexes
|
15c98f511782-0
|
In this section, we explain how you can get a list of your indexes, create an index, delete an index, and describe an index.
To learn about the concepts related to indexes, see Indexes.
⚠️ Warning: Indexes on the Starter (free) plan are deleted after 7 days of inactivity. To prevent this, send any API request or log into the console. This will count as activity.
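For example, a lightweight scheduled call such as the following sketch counts as activity (it assumes a valid API key and environment):

Python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
# Any authenticated API request resets the inactivity counter; listing indexes is a cheap one.
pinecone.list_indexes()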
Getting information on your indexes
List all your Pinecone indexes:
Python / JavaScript / curl
pinecone.list_indexes()
await pinecone.listIndexes();
curl -i https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY'
Get the configuration and current status of an index named "pinecone-index":
Python / JavaScript / curl
pinecone.describe_index("pinecone-index")
await pinecone.describeIndex(indexName);
curl -i -X GET https://controller.YOUR_ENVIRONMENT.pinecone.io/databases/example-index \
-H 'Api-Key: YOUR_API_KEY'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-1
|
Creating an index
The simplest way to create an index is as follows. This gives you an index with a single pod that will perform approximate nearest neighbor (ANN) search using cosine similarity:
Python / JavaScript / curl
pinecone.create_index("example-index", dimension=128)
await pinecone.createIndex({
name: "example-index",
dimension: 128,
});
curl -i -X POST https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "example-index",
"dimension": 128
}'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-2
|
A more complex index can be created as follows. This creates an index that measures similarity by Euclidean distance and runs on 4 s1 (storage-optimized) pods of size x1:
Python / JavaScript / curl
pinecone.create_index("example-index", dimension=128, metric="euclidean", pods=4, pod_type="s1.x1")
await pinecone.createIndex({
name: "example-index",
dimension: 128,
metric: "euclidean",
pods: 4,
podType: "s1.x1",
});
curl -i -X POST https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "example-index",
"dimension": 128,
"metric": "euclidean",
"pods": 4,
"pod_type": "p1.x1"
}'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-3
|
Create an index from a collection
To create an index from a collection, use the create_index operation and provide a source_collection parameter containing the name of the collection from which you wish to create an index. The new index is queryable and writable.
Creating an index from a collection generally takes about 10 minutes. Creating a p2 index from a collection can take several hours when the number of vectors is on the order of 1M.
Example
The following example creates an index named example-index with 128 dimensions from a collection named example-collection.
Python / JavaScript / curl
pinecone.create_index("example-index", dimension=128, source_collection="example-collection")
await pinecone.createIndex({
name: "example-index",
dimension: 128,
sourceCollection: "example-collection",
});
curl -i -X POST https://controller.us-west1-gcp.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
  "name": "example-index",
  "dimension": 128,
  "source_collection": "example-collection"
}'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-4
|
For more information about each pod type and size, see Indexes.
For the full list of parameters available to customize an index, see the create_index API reference.
Changing pod sizes
The default pod size is x1. After index creation, you can increase the pod size for an index.
Increasing the pod size of your index does not result in downtime. Reads and writes continue uninterrupted during the scaling process. Currently, you cannot reduce the pod size of your indexes. Your number of replicas and your total number of pods remain the same, but each pod changes size. Resizing completes in about 10 minutes.
To learn more about pod sizes, see Indexes.
Increasing the pod size for an index
To change the pod size of an existing index, use the configure_index operation and append the new size to the pod_type parameter, separated by a period (.).
Projects in the gcp-starter environment do not use pods.
Example
The following example assumes that my_index has size x1 and changes the size to x2.
Python / JavaScript / curl
pinecone.configure_index("my_index", pod_type="s1.x2")
await client.configureIndex("my_index", {
pod_type: "s1.x2",
});
curl -i -X PATCH https://controller.us-west1-gcp.pinecone.io/databases/example-index \
-H 'Api-Key: YOUR_API_KEY' \
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-5
|
-H 'Content-Type: application/json' \
-d '{
  "pod_type": "s1.x2"
}'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-6
|
Checking the status of a pod size change
To check the status of a pod size change, use the describe_index operation. The status field in the results contains the key-value pair "state":"ScalingUp" or "state":"ScalingDown" during the resizing process and the key-value pair "state":"Ready" after the process is complete.
The index fullness metric provided by describe_index_stats may be inaccurate until the resizing process is complete.
Example
The following example uses describe_index to get the index status of the index example-index. The status field contains the key-value pair "state":"ScalingUp", indicating that the resizing process is still ongoing.
Python / JavaScript / curl
pinecone.describe_index("example-index")
await pinecone.describeIndex({
name: "example-index",
});
curl -i -X GET https://controller.us-west1-gcp.pinecone.io/databases/example-index \
-H 'Api-Key: YOUR_API_KEY'
Results:
JSON
{
"database": {
"name": "example-index",
"dimensions": "768",
"metric": "cosine",
"pods": 6,
"replicas": 2,
"shards": 3,
"pod_type": "p1.x2",
"index_config": {},
"status": {
"ready": true,
"state": "ScalingUp"
}
}
}
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-7
|
Replicas
You can increase the number of replicas for your index to increase throughput (QPS). All indexes start with replicas=1.
Indexes in the gcp-starter environment do not support replicas.
Example
The following example uses the configure_index operation to set the number of replicas for the index example-index to 4.
Python / JavaScript / curl
pinecone.configure_index("example-index", replicas=4)
await pinecone.configureIndex("example-index", {
replicas: 4,
});
curl -i -X PATCH https://controller.us-west1-gcp.pinecone.io/databases/example-index \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"replicas": 4
}'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-8
|
See the configure_index API reference for more details.
Selective metadata indexing
By default, Pinecone indexes all metadata. When you index metadata fields, you can filter vector search queries using those fields. When you store metadata fields without indexing them, you keep memory utilization low, especially when you have many unique metadata values, and therefore can fit more vectors per pod.
Searches without metadata filters do not consider metadata. To combine keywords with semantic search, see sparse-dense embeddings.
When you create a new index, you can specify which metadata fields to index using the metadata_config parameter. Projects on the gcp-starter environment do not support the metadata_config parameter.
Example
Python / JavaScript / curl
metadata_config = {
"indexed": ["metadata-field-name"]
}
pinecone.create_index("example-index", dimension=128,
metadata_config=metadata_config)
pinecone.createIndex({
name: "example-index",
dimension: 128,
metadata_config: {
indexed: ["metadata-field-name"],
},
});
curl -i -X POST https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "example-index",
"dimension": 128,
"metadata_config": {
"indexed": ["metadata-field-name"]
}
}'
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-9
|
The value for the metadata_config parameter is a JSON object containing the names of the metadata fields to index.
JSON
{
"indexed": [
"metadata-field-1",
"metadata-field-2",
"metadata-field-n"
]
}
When you provide a metadata_config object, Pinecone only indexes the metadata fields present in that object: any metadata fields absent from the metadata_config object are not indexed.
When a metadata field is indexed, you can filter your queries using that metadata field; if a metadata field is not indexed, metadata filtering ignores that field.
Examples
The following example creates an index that only indexes the genre metadata field. Queries against this index that filter for the genre metadata field may return results; queries that filter for other metadata fields behave as though those fields do not exist.
Python / JavaScript / curl
metadata_config = {
"indexed": ["genre"]
}
|
https://docs.pinecone.io/docs/manage-indexes
|
15c98f511782-10
|
pinecone.create_index("example-index", dimension=128,
metadata_config=metadata_config)
pinecone.createIndex({
name: "example-index",
dimension: 128,
metadata_config: {
indexed: ["genre"],
},
});
curl -i -X POST https://controller.us-west1-gcp.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "example-index",
"dimension": 128,
"metadata_config": {
"indexed": ["genre"]
}
}'
Deleting an index
This operation will delete all of the data and the computing resources associated with the index.
ℹ️ Note: When you create an index, it runs as a service until you delete it. Users are billed for running indexes, so we recommend you delete any indexes you're not using. This will minimize your costs.
Delete a Pinecone index named "example-index":
Python / JavaScript / curl
pinecone.delete_index("example-index")
pinecone.deleteIndex("example-index");
curl -i -X DELETE https://controller.YOUR_ENVIRONMENT.pinecone.io/databases/example-index \
-H 'Api-Key: YOUR_API_KEY'
|
https://docs.pinecone.io/docs/manage-indexes
|
76a3b703550b-0
|
Overview
This document describes concepts related to Pinecone indexes. To learn how to create or modify an index, see Manage indexes.
An index is the highest-level organizational unit of vector data in Pinecone. It accepts and stores vectors, serves queries over the vectors it contains, and does other vector operations over its contents. Each index runs on at least one pod.
Pods, pod types, and pod sizes
Pods are pre-configured units of hardware for running a Pinecone service. Each index runs on one or more pods. Generally, more pods mean more storage capacity, lower latency, and higher throughput. You can also create pods of different sizes.
Once an index is created using a particular pod type, you cannot change the pod type for that index. However, you can create a collection from that index and then create a new index from the collection with a different pod type.
Different pod types are priced differently. See pricing for more details.
Starter plan
When using the starter plan, you can create one pod with enough resources to support approximately 100,000 vectors with 1536-dimensional embeddings and metadata; the capacity is proportional for other dimensions.
When using a starter plan, all create_index calls ignore the pod_type parameter.
s1 pods
These storage-optimized pods provide large storage capacity and lower overall costs with slightly higher query latencies than p1 pods. They are ideal for very large indexes with moderate or relaxed latency requirements.
|
https://docs.pinecone.io/docs/indexes
|
76a3b703550b-1
|
Each s1 pod has enough capacity for around 5M vectors of 768 dimensions.
p1 pods
These performance-optimized pods provide very low query latencies, but hold fewer vectors per pod than s1 pods. They are ideal for applications with low latency requirements (<100ms).
Each p1 pod has enough capacity for around 1M vectors of 768 dimensions.
p2 pods
The p2 pod type provides greater query throughput with lower latency. For vectors with fewer than 128 dimensions and queries where topK is less than 50, p2 pods support up to 200 QPS per replica and return queries in less than 10ms. This means that query throughput and latency are better than s1 and p1.
Each p2 pod has enough capacity for around 1M vectors of 768 dimensions. However, capacity may vary with dimensionality.
The data ingestion rate for p2 pods is significantly slower than for p1 pods; this rate decreases as the number of dimensions increases. For example, a p2 pod containing vectors with 128 dimensions can upsert up to 300 updates per second; a p2 pod containing vectors with 768 dimensions or more supports upsert of 50 updates per second. Because query latency and throughput for p2 pods vary from p1 pods, test p2 pod performance with your dataset.
The p2 pod type does not support sparse vector values.
Pod size and performance
|
https://docs.pinecone.io/docs/indexes
|
76a3b703550b-2
|
Pod size and performance
Pod performance varies depending on a variety of factors. To observe how your workloads perform on a given pod type, experiment with your own data set.
Each pod type supports four pod sizes: x1, x2, x4, and x8. Your index storage and compute capacity doubles for each size step. The default pod size is x1. You can increase the size of a pod after index creation.
To learn about changing the pod size of an index, see Manage indexes.
Distance metrics
You can choose from different metrics when creating a vector index:
|
https://docs.pinecone.io/docs/indexes
|
76a3b703550b-3
|
euclidean
This is used to calculate the distance between two data points in a plane. It is one of the most commonly used distance metrics. For an example, see our image similarity search example.
When you use metric='euclidean', the most similar results are those with the lowest score.
cosine
This is often used to find similarities between different documents. The advantage is that the scores are normalized to the [-1, 1] range.
dotproduct
This multiplies two vectors and can be used to tell how similar they are. The more positive the result, the closer the two vectors are in terms of their directions.
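To make the three metrics concrete, here is a small sketch that computes each of them for two toy vectors. This is plain Python arithmetic for illustration only; Pinecone computes scores server-side when you query an index:

Python
import math

a = [0.1, 0.2, 0.3]
b = [0.2, 0.3, 0.4]

# dotproduct: larger (more positive) means more similar.
dot = sum(x * y for x, y in zip(a, b))

# euclidean: smaller means more similar.
euclidean = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# cosine: normalized to the [-1, 1] range; larger means more similar.
cosine = dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(dot, euclidean, cosine)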
For the full list of parameters available to customize an index, see the create_index API reference.
Depending on your application, some metrics have better recall and precision performance than others. For more information, see What is Vector Similarity Search?
|
https://docs.pinecone.io/docs/indexes
|
61bf244afc2f-0
|
Overview
If you are a project owner, follow these steps to change the name of your project.
Access the Pinecone Console.
Click Settings in the left menu.
In the Settings view, click the PROJECTS tab.
Next to the project you want to update, click .
Under Project Name, enter the new project name.
Click SAVE CHANGES.
|
https://docs.pinecone.io/docs/rename-project
|
02acbd55900d-0
|
Overview
If you are a project owner, follow these steps to change the maximum total number of pods in your project.
Change project pod limit in console
Access the Pinecone Console.
Click Settings in the left menu.
In the Settings view, click the PROJECTS tab.
Next to the project you want to update, click .
Under Pod Limit, enter the new number of pods.
Click SAVE CHANGES.
|
https://docs.pinecone.io/docs/change-project-pod-limit
|
4b96f1fef33d-0
|
Overview
If you are a project or organization owner, follow these steps to add users to organizations and projects.
Add users to projects and organizations
Access the Pinecone Console.
Click Settings in the left menu.
In the Settings view, click the USERS tab.
Click +INVITE USER.
(Organization owner only) Select an organization role.
Select one or more projects.
Select a project role.
Enter the user's email address.
Click +INVITE USER.
When you invite another user to join your organization or project, Pinecone sends them an email containing a link that enables them to gain access to the organization or project. If they already have a Pinecone account, they still receive an email, but they can also immediately view the project.
|
https://docs.pinecone.io/docs/add-users-to-projects-and-organizations
|
e4c8b57346db-0
|
Overview
ℹ️ Info: Starter (free) users can only have 1 owned project. To create a new project, Starter users must upgrade to the Standard or Enterprise plan or delete their default project.
Follow these steps to create a new project:
Access the Pinecone Console.
Click Organizations in the left menu.
In the Organizations view, click the PROJECTS tab.
Click the +CREATE PROJECT button.
Enter the Project Name.
Select a cloud provider and region.
Enter the project pod limit.
Click CREATE PROJECT.
Next steps
Add users to your project.
Create an index.
|
https://docs.pinecone.io/docs/create-project
|
aa8954e9aa20-0
|
Overview
This document explains the concepts related to Pinecone projects.
Projects contain indexes and users
Each Pinecone project contains a number of indexes and users. Only a user who belongs to the project can access the indexes in that project. Each project also has at least one project owner. All of the pods in a single project are located in a single environment.
Project settings
When you create a new project, you can choose the name, deployment environment, and pod limit.
Project environment
When creating a project, you must choose a cloud environment for the indexes in that project. The following table lists the available cloud regions, the corresponding values of the environment parameter for the init() operation, and which billing tier has access to each environment:
|
https://docs.pinecone.io/docs/projects
|
aa8954e9aa20-1
|
Cloud region | environment value | Tier availability
GCP Starter (Iowa)* | gcp-starter | Starter
GCP US-West-1 Free (N. California) | us-west1-gcp-free | Starter
GCP Asia-Southeast-1 (Singapore) | asia-southeast1-gcp-free | Starter
GCP US-West-4 (Las Vegas) | us-west4-gcp | Starter
GCP US-West-1 (N. California) | us-west1-gcp | Standard / Enterprise
GCP US-Central-1 (Iowa) | us-central1-gcp | Standard / Enterprise
GCP US-West-4 (Las Vegas) | us-west4-gcp | Standard / Enterprise
GCP US-East-4 (Virginia) | us-east4-gcp | Standard / Enterprise
GCP northamerica-northeast-1 | northamerica-northeast1-gcp | Standard / Enterprise
GCP Asia-Northeast-1 (Japan) | asia-northeast1-gcp | Standard / Enterprise
GCP Asia-Southeast-1 (Singapore) | asia-southeast1-gcp | Standard / Enterprise
GCP US-East-1 (South Carolina) | us-east1-gcp | Standard / Enterprise
GCP EU-West-1 (Belgium) | eu-west1-gcp | Standard / Enterprise
GCP EU-West-4 (Netherlands) | eu-west4-gcp | Standard / Enterprise
AWS US-East-1 (Virginia) | us-east1-aws | Standard / Enterprise
|
https://docs.pinecone.io/docs/projects
|
aa8954e9aa20-2
|
* This environment has unique features and limitations. See gcp-starter environment for more information.
Contact us if you need a dedicated deployment in other regions.
The environment cannot be changed after the project is created.
Project pod limit
You can set the maximum number of pods that can be used in total across all indexes in a project. Use this to control costs.
The pod limit can be changed only by the project owner.
Project roles
There are two project roles: Project owner and project member. Table 1 below summarizes the permissions for each role.
Table 1: Project roles and permissions
|
https://docs.pinecone.io/docs/projects
|
aa8954e9aa20-3
|
Project role | Permissions in organization
Project owner | Manage project members; Manage project API keys; Manage pod limits
Project member | Access API keys; Create indexes in project; Use indexes in project
API keys
Each Pinecone project has one or more API keys. In order to make calls to the Pinecone API, a user must provide a valid API key for the relevant Pinecone project.
To view the API key for your project, open the Pinecone console, select the project, and click API Keys.
|
https://docs.pinecone.io/docs/projects
|
e154854ddb3c-0
|
Introduction
When planning your Pinecone deployment, it is important to understand the approximate storage requirements of your vectors to choose the appropriate pod type and number. This page will give guidance on sizing to help you plan accordingly.
As with all guidelines, these considerations are general and may not apply to your specific use case. We caution you to always test your deployment and ensure that the index configuration you are using is appropriate to your requirements.
Collections make it easy to create new versions of your index with different pod types and sizes, and we encourage you to take advantage of that feature to test different configurations. This guide is merely an overview of sizing considerations and should not be taken as a definitive guide.
Users on the Standard, Enterprise, and Enterprise Dedicated plans can contact support for further help with sizing and testing.
Overview
There are five main considerations when deciding how to configure your Pinecone index:
Number of vectors
Dimensionality of your vectors
Size of metadata on each vector
QPS throughput
Cardinality of indexed metadata
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-1
|
Each of these considerations comes with requirements for index size, pod type, and replication strategy.
Number of vectors
The most important consideration in sizing is the number of vectors you plan on working with. As a rule of thumb, a single p1 pod can store approximately 1M vectors, while an s1 pod can store 5M vectors. However, this can be affected by other factors, such as dimensionality and metadata, which are explained below.
Dimensionality of vectors
The rules of thumb above for how many vectors fit in a given pod assume a typical configuration of 768 dimensions per vector. Because your use case dictates the dimensionality of your vectors, the amount of space required to store them may be larger or smaller.
Each vector dimension consumes 4 bytes of memory and storage, so if you expect to have 1M vectors with 768 dimensions each, that’s about 3GB of storage without factoring in metadata or other overhead. Using that reference, we can estimate the typical pod size and number needed for a given index. Table 1 below gives some examples of this.
Table 1: Estimated number of pods per 1M vectors by dimensionality
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-2
|
Pod type | Dimensions | Estimated max vectors per pod
p1 | 512 | 1,250,000
p1 | 768 | 1,000,000
p1 | 1024 | 675,000
p2 | 512 | 1,250,000
p2 | 768 | 1,100,000
p2 | 1024 | 1,000,000
s1 | 512 | 8,000,000
s1 | 768 | 5,000,000
s1 | 1024 | 4,000,000
Pinecone does not support fractional pod deployments, so always round up to the next nearest whole number when choosing your pods.
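As a back-of-the-envelope check of the 4-bytes-per-dimension figure above (plain arithmetic, before metadata and other overhead):

Python
num_vectors = 1_000_000
dimensions = 768
bytes_per_dimension = 4

raw_bytes = num_vectors * dimensions * bytes_per_dimension
print(f"{raw_bytes / 1e9:.2f} GB of raw vector data")  # ~3.07 GB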
Queries per second (QPS)
QPS speeds are governed by a combination of the pod type of the index, the number of replicas, and the top_k value of queries. The pod type is the primary factor driving QPS, as the different pod types are optimized for different approaches.
The p1 pods are performance-optimized pods which provide very low query latencies, but hold fewer vectors per pod than s1 pods. They are ideal for applications with low latency requirements (<100ms). The s1 pods are optimized for storage and provide large storage capacity and lower overall costs with slightly higher query latencies than p1 pods. They are ideal for very large indexes with moderate or relaxed latency requirements.
The p2 pod type provides greater query throughput with lower latency. They support 200 QPS per replica and return queries in less than 10ms. This means that query throughput and latency are better than s1 and p1, especially for low dimension vectors (<512D).
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-3
|
As a rule, a single p1 pod with 1M vectors of 768 dimensions each and no replicas can handle about 20 QPS. It’s possible to get greater or lesser speeds, depending on the size of your metadata, number of vectors, the dimensionality of your vectors, and the top_k value for your search. See Table 2 below for more examples.
Table 2: QPS by pod type and top_k value*
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-4
|
Pod type | top_k 10 | top_k 250 | top_k 1000
p1 | 30 | 25 | 20
p2 | 150 | 50 | 20
s1 | 10 | 10 | 10
*The QPS values in Table 2 represent baseline QPS with 1M vectors and 768 dimensions.
Adding replicas is the simplest way to increase your QPS. Each replica increases the throughput potential by roughly the same QPS, so aiming for 150 QPS using p1 pods means using the primary pod and 5 replicas. Using threading or multiprocessing in your application is also important, as issuing single queries sequentially still subjects you to delays from any underlying latency. The Pinecone gRPC client can also be used to increase throughput of upserts.
Metadata cardinality and size
The last consideration when planning your indexes is the cardinality and size of your metadata. While the increases are small when talking about a few million vectors, they can have a real impact as you grow to hundreds of millions or billions of vectors.
Indexes with very high cardinality, like those storing a unique user ID on each vector, can have significant memory requirements, resulting in fewer vectors fitting per pod. Also, if the size of the metadata per vector is larger, the index requires more storage. Limiting which metadata fields are indexed using selective metadata indexing can help lower memory usage.
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-5
|
Pod sizes
You can also start with one of the larger pod sizes, like p1.x2. Each step up in pod size doubles the space available for your vectors. We recommend starting with x1 pods and scaling as you grow. This way, you avoid starting with a pod size that is too large, which would leave you no room to scale up and force you to migrate to a new index before you’re ready.
Projects on the gcp-starter environment do not use pods.
Example applications
The following examples will showcase how to use the sizing guidelines above to choose the appropriate type, size, and number of pods for your index.
Example 1: Semantic search of news articles
In our first example, we’ll use the demo app for semantic search from our documentation. In this case, we’re only working with 204,135 vectors. The vectors use 300 dimensions each, well under the general measure of 768 dimensions. Using the rule of thumb above of up to 1M vectors per p1 pod, we can run this app comfortably with a single p1.x1 pod.
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-6
|
Example 2: Facial recognition
For this example, suppose you’re building an application to identify customers using facial recognition for a secure banking app. Facial recognition can work with as few as 128 dimensions, but in this case, because the app will be used for access to finances, we want to make sure we’re certain that the person using it is the right one. We plan for 100M customers and use 2048 dimensions per vector.
We know from our rules of thumb above that 1M vectors with 768 dimensions fit nicely in a p1.x1 pod. We can just divide those numbers into the new targets to get the ratios we’ll need for our pod estimate:
100M / 1M = 100 base p1 pods
2048 / 768 = 2.667 vector ratio
2.667 * 100 = 266.7, so 267 pods rounding up
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
e154854ddb3c-7
|
So we need 267 p1.x1 pods. We can reduce that by switching to s1 pods instead, sacrificing latency by increasing storage availability. They hold five times the storage of p1.x1, so the math is simple:
267 / 5 = 53.4, so 54 pods rounding up
So we estimate that we need 54 s1.x1 pods to store very high dimensional data for the face of each of the bank’s customers.
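As a quick sanity check of that arithmetic (plain Python mirroring the rules of thumb above; the inputs are this example's assumptions, not fixed limits):

Python
import math

vectors = 100_000_000             # planned customers
dimensions = 2048
base_vectors_per_p1 = 1_000_000   # ~1M vectors at 768 dimensions per p1.x1 pod
base_dimensions = 768
s1_capacity_factor = 5            # an s1 pod holds ~5x the vectors of a p1 pod

p1_pods = math.ceil((vectors / base_vectors_per_p1) * (dimensions / base_dimensions))
s1_pods = math.ceil(p1_pods / s1_capacity_factor)
print(p1_pods, s1_pods)  # 267 54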
|
https://docs.pinecone.io/docs/choosing-index-type-and-size
|
7c8ea518c360-0
|
This guide explains how to set up a Pinecone vector database in minutes.
1. Install Pinecone client (optional)
This step is optional. Do this step only if you want to use the Python client.
Use the following shell command to install Pinecone:
Python / JavaScript
pip install pinecone-client
npm i @pinecone-database/pinecone
For other clients, see Libraries.
2. Get and verify your Pinecone API key
To use Pinecone, you must have an API key. To find your API key, open the Pinecone console and click API Keys. This view also displays the environment for your project. Note both your API key and your environment.
To verify that your Pinecone API key works, use the following commands:
Python / JavaScript / curl
import pinecone
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
import { PineconeClient } from '@pinecone-database/pinecone';
const pinecone = new PineconeClient();
await pinecone.init({
apiKey: "YOUR_API_KEY",
environment: "YOUR_ENVIRONMENT",
});
curl -i https://controller.YOUR_ENVIRONMENT.pinecone.io/actions/whoami -H 'Api-Key: YOUR_API_KEY'
If you don't receive an error message, then your API key is valid.
3. Hello, Pinecone!
You can complete the remaining steps in three ways:
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-1
|
Use the "Hello, Pinecone!" colab notebook to write and execute Python in your browser.
Copy the commands below into your local installation of Python.
Use the cURL API commands below.
1. Initialize Pinecone
Python / JavaScript / curl
import pinecone
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
import { PineconeClient } from '@pinecone-database/pinecone';
const pinecone = new PineconeClient();
await pinecone.init({
apiKey: "YOUR_API_KEY",
environment: "YOUR_ENVIRONMENT",
});
# Not applicable
2. Create an index.
The commands below create an index named "quickstart" that performs approximate nearest-neighbor search using the Euclidean distance metric for 8-dimensional vectors.
Index creation takes roughly a minute.
Python / JavaScript / curl
pinecone.create_index("quickstart", dimension=8, metric="euclidean")
const createRequest = {
name: "quickstart",
dimension: 8,
metric:"euclidean",
};
await pinecone.createIndex({ createRequest });
curl -i -X POST \
-H 'Content-Type: application/json' \
-H 'Api-Key: YOUR_API_KEY_HERE' \
https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-d '{
"name": "quickstart",
"dimension": 8,
"metric": "euclidean"
}'
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-2
|
⚠️ Warning: In general, indexes on the Starter (free) plan are archived as collections and deleted after 7 days of inactivity; for indexes created by certain open source projects such as AutoGPT, indexes are archived and deleted after 1 day of inactivity. To prevent this, you can send any API request to Pinecone and the counter will reset.
3. Retrieve a list of your indexes.
Once your index is created, its name appears in the index list.
Use the following commands to return a list of your indexes.
Python / JavaScript / curl
pinecone.list_indexes()
# Returns:
# ['quickstart']
const list = await pinecone.listIndexes();
// Returns:
// [ 'quickstart' ]
curl -i https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-H "Api-Key: YOUR_API_KEY"
# Output:
# ["quickstart"]
4. Connect to the index (Client only).
Before you can query your index using a client, you must connect to the index.
Use the following commands to connect to your index.
Python / JavaScript / curl
index = pinecone.Index("quickstart")
const index = pinecone.Index("quickstart");
# Not applicable
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-3
|
5. Insert the data.
To ingest vectors into your index, use the upsert operation.
The upsert operation inserts a new vector in the index or updates the vector if a vector with the same ID is already present.
The following commands upsert 5 8-dimensional vectors into your index.
Python / JavaScript / curl
# Upsert sample data (5 8-dimensional vectors)
index.upsert([
("A", [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]),
("B", [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]),
("C", [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]),
("D", [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]),
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-4
|
("E", [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
])
const upsertRequest = {
vectors: [
{
"id": "A",
"values": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
},
{
"id": "B",
"values": [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
},
{
"id": "C",
"values": [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
},
{
"id": "D",
"values": [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
},
{
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-5
|
"id": "E",
"values": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
}
]
};
await index.upsert({ upsertRequest });
curl -i -X POST https://quickstart-YOUR_PROJECT.svc.YOUR_ENVIRONMENT.pinecone.io/vectors/upsert \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"vectors": [
{
"id": "A",
"values": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
},
{
"id": "B",
"values": [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
},
{
"id": "C",
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-6
|
"id": "C",
"values": [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
},
{
"id": "D",
"values": [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
},
{
"id": "E",
"values": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
}
]
}'
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-7
|
The cURL command above uses the endpoint for your Pinecone index.
ℹ️ Note: When upserting larger amounts of data, upsert data in batches of 100 vectors or fewer over multiple upsert requests.
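A minimal sketch of that batching pattern with the Python client (the chunking helper, vector data, and batch size are illustrative):

Python
import itertools
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("quickstart")

def chunks(iterable, batch_size=100):
    """Yield successive batches of at most batch_size items."""
    it = iter(iterable)
    batch = list(itertools.islice(it, batch_size))
    while batch:
        yield batch
        batch = list(itertools.islice(it, batch_size))

# vectors is an iterable of (id, values) tuples, for example [("A", [0.1, ...]), ...].
vectors = [(f"vec-{i}", [0.1] * 8) for i in range(1000)]
for batch in chunks(vectors, batch_size=100):
    index.upsert(vectors=batch)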
6. Get statistics about your index.
The following commands return statistics about the contents of your index.
Python / JavaScript / curl
index.describe_index_stats()
# Returns:
# {'dimension': 8, 'index_fullness': 0.0, 'namespaces': {'': {'vector_count': 5}}}
const indexStats = await index.describeIndexStats({
describeIndexStatsRequest: {},
});
// Returns:
/** {
"namespaces": {
"": {
"vectorCount": 5
}
},
"dimension": 8
}
**/
curl -i https://quickstart-YOUR_PROJECT.svc.YOUR_ENVIRONMENT.pinecone.io/describe_index_stats \
-H 'Api-Key: YOUR_API_KEY'
# Output:
# {
# "namespaces": {
# "": {
# "vectorCount": 5
# }
# },
# "dimension": 8
# }
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-8
|
7. Query the index and get similar vectors.
The following example queries the index for the three (3) vectors that are most similar to an example 8-dimensional vector using the Euclidean distance metric specified in step 2 ("Create an index.") above.
Python / JavaScript / curl
index.query(
vector=[0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
top_k=3,
include_values=True
)
# Returns:
# {'matches': [{'id': 'C',
# 'score': 0.0,
# 'values': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]},
# {'id': 'D',
# 'score': 0.0799999237,
# 'values': [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]},
# {'id': 'B',
# 'score': 0.0800000429,
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-9
|
# 'values': [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]}],
# 'namespace': ''}
const queryRequest = {
topK: 3,
vector: [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
includeValues: true
};
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-10
|
const queryResponse = await index.query({ queryRequest });
// Returns:
/** {
"results": [],
"matches": [{
"id": "C",
"score": 0,
"values": [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
}, {
"id": "D",
"score": 0.0799999237,
"values": [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
}, {
"id": "B",
"score": 0.0800000429,
"values": [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
}],
"namespace": ""
}
**/
curl -i -X POST https://quickstart-YOUR_PROJECT.svc.YOUR_ENVIRONMENT.pinecone.io/query \
-H 'Api-Key: YOUR_API_KEY' \
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-11
|
-H 'Content-Type: application/json' \
-d '{
"vector": [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
"topK": 3,
"includeValues": true
}'
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-12
|
# Output:
# {
# "matches":[
# {
# "id": "C",
# "score": -1.76717265e-07,
# "values": [0.3,0.3,0.3,0.3,0.3,0.3,0.3,0.3]
# },
# {
# "id": "B",
# "score": 0.080000028,
# "values": [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
# },
# {
# "id": "D",
# "score": 0.0800001323,
# "values": [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
# }
# ],
# "namespace": ""
# }
|
https://docs.pinecone.io/docs/quickstart
|
7c8ea518c360-13
|
8. Delete the index.
Once you no longer need the index, use the delete_index operation to delete it.
The following commands delete the index.
Python / JavaScript / curl
pinecone.delete_index("quickstart")
await pinecone.deleteIndex({ indexName:"quickstart" });
curl -i -X DELETE https://controller.YOUR_ENVIRONMENT.pinecone.io/databases/quickstart \
-H 'Api-Key: YOUR_API_KEY'
⚠️ Warning: After you delete an index, you cannot use it again.
Next steps
Now that you’re successfully making indexes with your API key, you can start inserting data or view more examples.
|
https://docs.pinecone.io/docs/quickstart
|
a127577ba5d8-0
|
Pinecone Overview
Pinecone makes it easy to provide long-term memory for high-performance AI applications. It’s a managed, cloud-native vector database with a simple API and no infrastructure hassles. Pinecone serves fresh, filtered query results with low latency at the scale of billions of vectors.
Vector embeddings provide long-term memory for AI.
Applications that involve large language models, generative AI, and semantic search rely on vector embeddings, a type of data that represents semantic information. This information allows AI applications to gain understanding and maintain a long-term memory that they can draw upon when executing complex tasks.
Vector databases store and query embeddings quickly and at scale.
Vector databases like Pinecone offer optimized storage and querying capabilities for embeddings. Traditional scalar-based databases can’t keep up with the complexity and scale of such data, making it difficult to extract insights and perform real-time analysis. Vector indexes like FAISS lack useful features that are present in any database. Vector databases combine the familiar features of traditional databases with the optimized performance of vector indexes.
Pinecone indexes store records with vector data.
Each record in a Pinecone index contains a unique ID and an array of floats representing a dense vector embedding.
Each record may also contain a sparse vector embedding for hybrid search and metadata key-value pairs for filtered queries.
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-1
|
Pinecone queries are fast and fresh.
Pinecone returns low-latency, accurate results for indexes with billions of vectors. High-performance pods return up to 200 queries per second per replica. Queries reflect up-to-the-second updates such as upserts and deletes. Filter by namespaces and metadata or add resources to improve performance.
Upsert and query vector embeddings with the Pinecone API.
Perform CRUD operations and query your vectors using HTTP, Python, or Node.js.
Python
index = pinecone.Index('example-index')
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-2
|
upsert_response = index.upsert(
vectors=[
{'id': 'vec1',
'values': [0.1, 0.2, 0.3],
'metadata': {'genre': 'drama'},
'sparse_values': {
'indices': [10, 45, 16],
'values': [0.5, 0.5, 0.2]
}},
{'id': 'vec2',
'values': [0.2, 0.3, 0.4],
'metadata': {'genre': 'action'},
'sparse_values': {
'indices': [15, 40, 11],
'values': [0.4, 0.5, 0.2]
}}
],
namespace='example-namespace'
)
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-3
|
Query your index for the most similar vectors.
Specify the distance metric your index uses to evaluate vector similarity, along with dimensions and replicas.
Python / JavaScript / curl
pinecone.create_index("example-index", dimension=128, metric="euclidean", pods=4, pod_type="s1.x1")
await pinecone.createIndex({
name: "example-index",
dimension: 128,
metric: "euclidean",
pods: 4,
podType: "s1.x1",
});
curl -i -X POST https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
"name": "example-index",
"dimension": 128,
"metric": "euclidean",
"pods": 4,
"pod_type": "p1.x1"
}'
Find the top k most similar vectors, or query by ID.
Python / JavaScript / curl
index.query(
vector=[0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
top_k=3,
include_values=True
)
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-4
|
# Returns:
# {'matches': [{'id': 'C',
# 'score': -1.76717265e-07,
# 'values': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]},
# {'id': 'B',
# 'score': 0.080000028,
# 'values': [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]},
# {'id': 'D',
# 'score': 0.0800001323,
# 'values': [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]}],
# }
const index = pinecone.Index("example-index");
const queryRequest = {
vector: [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
topK: 3,
includeValues: true
};
const queryResponse = await index.query({ queryRequest });
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-5
|
// Returns:
// {'matches': [{'id': 'C',
// 'score': -1.76717265e-07,
// 'values': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]},
// {'id': 'B',
// 'score': 0.080000028,
// 'values': [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]},
// {'id': 'D',
// 'score': 0.0800001323,
// 'values': [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]}],
// }
curl -i -X POST https://hello-pinecone-YOUR_PROJECT.svc.YOUR_ENVIRONMENT.pinecone.io/query \
-H 'Api-Key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-6
|
"vector":[0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
"topK": 3,
"includeValues": true
}'
|
https://docs.pinecone.io/docs/overview
|
a127577ba5d8-7
|
Get started
Go to the quickstart guide to get a production-ready vector search service up and running in minutes.
|
https://docs.pinecone.io/docs/overview
|