---
license: bsd-3-clause
configs:
- config_name: no-vectors
data_files: no-vectors/*.parquet
default: true
- config_name: aws-titan-embed-text-v2
data_files: aws/titan-embed-text-v2/*.parquet
- config_name: cohere-embed-multilingual-v3
data_files: cohere/embed-multilingual-v3/*.parquet
- config_name: openai-text-embedding-3-small
data_files: openai/text-embedding-3-small/*.parquet
- config_name: openai-text-embedding-3-large
data_files: openai/text-embedding-3-large/*.parquet
- config_name: snowflake-arctic-embed
data_files: ollama/snowflake-arctic/*.parquet
size_categories:
- 100K<n<1M
---
## Loading dataset without vector embeddings
You can load the raw dataset without vectors, like this:
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)
```
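Each record is a plain Python dict, so you can inspect the first item to see the available fields (a quick sketch; for the default config you should see `text`, `title`, `url`, and `wiki_id`):
```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)

item = next(iter(dataset))  # pull the first record off the stream
print(item.keys())
```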
## Loading dataset with vector embeddings
You can also load the dataset with vectors, like this:
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True)
# dataset = load_dataset("weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True)
for item in dataset:
    print(item["text"])
    print(item["title"])
    print(item["url"])
    print(item["wiki_id"])
    print(item["vector"])
    print()
```
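To load these precomputed embeddings into Weaviate, a batch import along the following lines should work. This is a minimal sketch using the v4 Python client, assuming a local instance and a "Wiki" collection configured with a named vector `main_vector`, as shown in the sections below:
```python
import weaviate
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True)

client = weaviate.connect_to_local()  # adjust for your deployment
wiki = client.collections.get("Wiki")

# Batch-insert objects together with their precomputed vectors
with wiki.batch.dynamic() as batch:
    for item in dataset:
        batch.add_object(
            properties={
                "text": item["text"],
                "title": item["title"],
                "url": item["url"],
                "wiki_id": item["wiki_id"],
            },
            vector={"main_vector": item["vector"]},  # named vector, matching the collection config
        )

client.close()
```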
## Supported Datasets
### Data only - no vectors
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "no-vectors", split="train", streaming=True)
```
You can also skip the config name, as "no-vectors" is the default config:
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)
```
### AWS
**aws-titan-embed-text-v2** - 1024d vectors - generated with AWS Bedrock
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "aws-titan-embed-text-v2", split="train", streaming=True)
```
#### Weaviate collection configuration:
```python
from weaviate.classes.config import Configure
client.collections.create(
    name="Wiki",
    vectorizer_config=[
        Configure.NamedVectors.text2vec_aws(
            name="main_vector",
            model="amazon.titan-embed-text-v2:0",
            region="us-east-1",  # use the region that matches your AWS setup
            source_properties=['title', 'text'],  # which properties should be used to generate a vector
        )
    ],
)
```
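The configuration above assumes `client` is already connected with AWS credentials, so Weaviate can call Bedrock on your behalf. A minimal connection sketch, assuming your setup passes credentials via the `X-AWS-Access-Key` and `X-AWS-Secret-Key` headers:
```python
import os
import weaviate

client = weaviate.connect_to_local(
    headers={
        "X-AWS-Access-Key": os.environ["AWS_ACCESS_KEY_ID"],
        "X-AWS-Secret-Key": os.environ["AWS_SECRET_ACCESS_KEY"],
    }
)
```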
### Cohere
**embed-multilingual-v3** - 1024d vectors - generated with Cohere
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "cohere-embed-multilingual-v3", split="train", streaming=True)
```
#### Weaviate collection configuration:
```python
from weaviate.classes.config import Configure
client.collections.create(
    name="Wiki",
    vectorizer_config=[
        Configure.NamedVectors.text2vec_cohere(
            name="main_vector",
            model="embed-multilingual-v3.0",
            source_properties=['title', 'text'],  # which properties should be used to generate a vector
        )
    ],
)
```
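As with the AWS example, `client` must carry the provider API key; a sketch assuming the `X-Cohere-Api-Key` header:
```python
import os
import weaviate

client = weaviate.connect_to_local(
    headers={"X-Cohere-Api-Key": os.environ["COHERE_API_KEY"]}
)
```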
### OpenAI
**text-embedding-3-small** - 1536d vectors - generated with OpenAI
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True)
```
#### Weaviate collection configuration:
```python
from weaviate.classes.config import Configure
client.collections.create(
    name="Wiki",
    vectorizer_config=[
        Configure.NamedVectors.text2vec_openai(
            name="main_vector",
            model="text-embedding-3-small",
            source_properties=['title', 'text'],  # which properties should be used to generate a vector
        )
    ],
)
```
**text-embedding-3-large** - 3072d vectors - generated with OpenAI
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-large", split="train", streaming=True)
```
#### Weaviate collection configuration:
```python
from weaviate.classes.config import Configure
client.collections.create(
    name="Wiki",
    vectorizer_config=[
        Configure.NamedVectors.text2vec_openai(
            name="main_vector",
            model="text-embedding-3-large",
            source_properties=['title', 'text'],  # which properties should be used to generate a vector
        )
    ],
)
```
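Both OpenAI configurations above expect `client` to be connected with an OpenAI API key; a sketch assuming the `X-OpenAI-Api-Key` header:
```python
import os
import weaviate

client = weaviate.connect_to_local(
    headers={"X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"]}
)
```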
### Snowflake
**snowflake-arctic-embed** - 1024d vectors - generated with Ollama
```python
from datasets import load_dataset
dataset = load_dataset("weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True)
```
#### Weaviate collection configuration:
```python
from weaviate.classes.config import Configure
client.collections.create(
    name="Wiki",
    vectorizer_config=[
        Configure.NamedVectors.text2vec_ollama(
            name="main_vector",
            model="snowflake-arctic-embed",
            api_endpoint="http://host.docker.internal:11434",  # If using Docker
            source_properties=["title", "text"],
        ),
    ],
)
```
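Once a collection is populated, you can run a semantic search against the named vector. A minimal sketch using the v4 client's `near_text` query, assuming `client` is connected as in the sketches above (the query string is just an example):
```python
wiki = client.collections.get("Wiki")

response = wiki.query.near_text(
    query="famous volcanic eruptions",  # example query
    target_vector="main_vector",  # the named vector configured above
    limit=3,
)

for obj in response.objects:
    print(obj.properties["title"], "-", obj.properties["url"])
```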