Tom Aarsen committed
Commit dfd0c04 · Parent(s): a261892
Also link to the training script
README.md CHANGED

@@ -8464,9 +8464,9 @@ model-index:
 
 This is a [sentence-transformers](https://www.SBERT.net) model trained on the [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq), [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1), [squad](https://huggingface.co/datasets/sentence-transformers/squad), [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc), [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli), [paq](https://huggingface.co/datasets/sentence-transformers/paq), [trivia_qa](https://huggingface.co/datasets/sentence-transformers/trivia-qa), [msmarco_10m](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets), [swim_ir](https://huggingface.co/datasets/nthakur/swim-ir-monolingual), [pubmedqa](https://huggingface.co/datasets/sentence-transformers/pubmedqa), [miracl](https://huggingface.co/datasets/sentence-transformers/miracl), [mldr](https://huggingface.co/datasets/sentence-transformers/mldr) and [mr_tydi](https://huggingface.co/datasets/sentence-transformers/mr-tydi) datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and is designed to be used for semantic search.
 
-This model was trained with a [Matryoshka loss](https://huggingface.co/blog/matryoshka), allowing you to truncate the embeddings for faster retrieval at minimal performance costs.
-
-See [
+* **Matryoshka:** This model was trained with a [Matryoshka loss](https://huggingface.co/blog/matryoshka), allowing you to truncate the embeddings for faster retrieval at minimal performance costs.
+* **Evaluations:** See [Evaluations](#evaluation) for details on performance on NanoBEIR, embedding speed, and Matryoshka dimensionality truncation.
+* **Training Script:** See [train.py](train.py) for the training script used to train this model from scratch.
 
 ## Model Details
 
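The Matryoshka bullet added by this commit says the 1024-dimensional embeddings can be truncated for faster retrieval. A minimal sketch of how that truncation works in principle, not taken from the commit: keep only the leading dimensions of each normalized embedding and re-normalize, so cosine similarity stays a plain dot product. The random vectors and the 1024/256 dimensions here are illustrative stand-ins for real model outputs.

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` Matryoshka dimensions and re-normalize,
    so cosine similarity remains a simple dot product."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Illustrative stand-ins for real 1024-dim, L2-normalized model outputs.
rng = np.random.default_rng(0)
full = rng.normal(size=(4, 1024))
full /= np.linalg.norm(full, axis=1, keepdims=True)

# Truncating 1024 -> 256 makes each similarity computation ~4x cheaper.
small = truncate_embeddings(full, 256)
print(small.shape)  # (4, 256)
```

In sentence-transformers this is exposed more directly via the `truncate_dim` argument of `SentenceTransformer(...)`, which truncates embeddings at encode time.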