|
--- |
|
license: bsd-3-clause |
|
configs: |
|
- config_name: no-vectors |
|
data_files: no-vectors/*.parquet |
|
default: true |
|
- config_name: aws-titan-embed-text-v2 |
|
data_files: aws/titan-embed-text-v2/*.parquet |
|
- config_name: cohere-embed-multilingual-v3 |
|
data_files: cohere/embed-multilingual-v3/*.parquet |
|
- config_name: openai-text-embedding-3-small |
|
data_files: openai/text-embedding-3-small/*.parquet |
|
- config_name: openai-text-embedding-3-large |
|
data_files: openai/text-embedding-3-large/*.parquet |
|
- config_name: snowflake-arctic-embed |
|
data_files: ollama/snowflake-arctic/*.parquet |
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
|
|
## Loading the dataset without vector embeddings
|
|
|
You can load the raw dataset without vectors, like this: |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True) |
|
``` |
|
|
|
## Loading the dataset with vector embeddings
|
|
|
You can also load the dataset with vectors, like this: |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True) |
|
# dataset = load_dataset("weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True) |
|
|
|
for item in dataset:

    print(item["text"])

    print(item["title"])

    print(item["url"])

    print(item["wiki_id"])

    print(item["vector"])

    print()
|
``` |
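

Each record's `vector` is a plain list of floats with the dimensionality of the chosen model, so you can work with it directly, for example to compare two records by cosine similarity. A minimal sketch (the short vectors below are made up for illustration; real embeddings have e.g. 1536 dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors for illustration only
print(cosine_similarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.25]))  # close to 1.0 for near-identical vectors
```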
|
|
|
## Supported Datasets |
|
|
|
### Data only - no vectors |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "no-vectors", split="train", streaming=True) |
|
``` |
|
|
|
You can also skip the config name, as "no-vectors" is the default config:
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True) |
|
``` |
|
|
|
### AWS |
|
|
|
**aws-titan-embed-text-v2** - 1024d vectors - generated with AWS Bedrock |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "aws-titan-embed-text-v2", split="train", streaming=True) |
|
``` |
|
|
|
#### Weaviate collection configuration: |
|
|
|
```python |
|
from weaviate.classes.config import Configure

# `client` is assumed to be a connected weaviate.WeaviateClient

client.collections.create(

    name="Wiki",

    vectorizer_config=[

        Configure.NamedVectors.text2vec_aws(

            name="main_vector",

            model="amazon.titan-embed-text-v2:0",

            region="us-east-1",  # make sure to use the correct region for you

            source_properties=['title', 'text'],  # which properties should be used to generate a vector

        )

    ],

)
|
``` |
|
|
|
### Cohere |
|
|
|
**embed-multilingual-v3** - 1024d vectors - generated with Cohere
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "cohere-embed-multilingual-v3", split="train", streaming=True) |
|
``` |
|
|
|
#### Weaviate collection configuration: |
|
|
|
```python |
|
from weaviate.classes.config import Configure

# `client` is assumed to be a connected weaviate.WeaviateClient

client.collections.create(

    name="Wiki",

    vectorizer_config=[

        Configure.NamedVectors.text2vec_cohere(

            name="main_vector",

            model="embed-multilingual-v3.0",

            source_properties=['title', 'text'],  # which properties should be used to generate a vector

        )

    ],

)
|
``` |
|
|
|
### OpenAI |
|
|
|
**text-embedding-3-small** - 1536d vectors - generated with OpenAI |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True) |
|
``` |
|
|
|
#### Weaviate collection configuration: |
|
|
|
```python |
|
from weaviate.classes.config import Configure

# `client` is assumed to be a connected weaviate.WeaviateClient

client.collections.create(

    name="Wiki",

    vectorizer_config=[

        Configure.NamedVectors.text2vec_openai(

            name="main_vector",

            model="text-embedding-3-small",

            source_properties=['title', 'text'],  # which properties should be used to generate a vector

        )

    ],

)
|
``` |
|
|
|
**text-embedding-3-large** - 3072d vectors - generated with OpenAI |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-large", split="train", streaming=True) |
|
``` |
|
|
|
#### Weaviate collection configuration: |
|
|
|
```python |
|
from weaviate.classes.config import Configure

# `client` is assumed to be a connected weaviate.WeaviateClient

client.collections.create(

    name="Wiki",

    vectorizer_config=[

        Configure.NamedVectors.text2vec_openai(

            name="main_vector",

            model="text-embedding-3-large",

            source_properties=['title', 'text'],  # which properties should be used to generate a vector

        )

    ],

)
|
``` |
|
|
|
### Snowflake |
|
|
|
**snowflake-arctic-embed** - 1024d vectors - generated with Ollama |
|
|
|
```python |
|
from datasets import load_dataset |
|
dataset = load_dataset("weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True) |
|
``` |
|
|
|
#### Weaviate collection configuration: |
|
|
|
```python |
|
from weaviate.classes.config import Configure

# `client` is assumed to be a connected weaviate.WeaviateClient

client.collections.create(

    name="Wiki",

    vectorizer_config=[

        Configure.NamedVectors.text2vec_ollama(

            name="main_vector",

            model="snowflake-arctic-embed",

            api_endpoint="http://host.docker.internal:11434",  # If using Docker

            source_properties=["title", "text"],

        ),

    ],

)
|
``` |
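

Once a collection exists, you can import the streamed records together with their precomputed vectors, so Weaviate does not need to re-vectorize them. Below is a minimal sketch, assuming `client` is a connected `weaviate.WeaviateClient` and the collection uses the named vector `main_vector` as configured above; the helper names (`to_properties`, `import_items`) are illustrative, not part of the dataset.

```python
def to_properties(item):
    """Map one dataset record to Weaviate object properties.

    The vector is intentionally left out; it is passed separately on import.
    """
    return {
        "title": item["title"],
        "text": item["text"],
        "url": item["url"],
        "wiki_id": item["wiki_id"],
    }


def import_items(client, dataset):
    """Batch-import records with their precomputed named vectors."""
    wiki = client.collections.get("Wiki")
    with wiki.batch.dynamic() as batch:
        for item in dataset:
            batch.add_object(
                properties=to_properties(item),
                vector={"main_vector": item["vector"]},
            )
```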
|
|