Update README.md
README.md
CHANGED
@@ -2622,7 +2622,7 @@ model-index:
 ## Intended Usage & Model Info
 
 `jina-embeddings-v2-small-en` is an English, monolingual **embedding model** supporting **8192 sequence length**.
-It is based on a
+It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow a longer sequence length.
 The backbone `jina-bert-v2-small-en` is pretrained on the C4 dataset.
 The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
 These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
@@ -2633,17 +2633,11 @@ This makes our model useful for a range of use cases, especially when processing
 This model has 33 million parameters, which enables lightning-fast and memory-efficient inference, while still delivering impressive performance.
 Additionally, we provide the following embedding models:
 
-**V1 (Based on T5, 512 Seq)**
-
-- [`jina-embeddings-v1-small-en`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
-- [`jina-embeddings-v1-base-en`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
-- [`jina-embeddings-v1-large-en`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
-
-**V2 (Based on JinaBert, 8k Seq)**
-
 - [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters **(you are here)**.
 - [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
-- [`jina-embeddings-v2-
+- [`jina-embeddings-v2-base-zh`](): Chinese-English bilingual embedding model (releasing soon).
+- [`jina-embeddings-v2-base-de`](): German-English bilingual embedding model (releasing soon).
+- [`jina-embeddings-v2-base-es`](): Spanish-English bilingual embedding model (releasing soon).
 
 ## Data & Parameters
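The symmetric bidirectional ALiBi referenced in the change above replaces positional embeddings with an attention bias that grows with token distance, which is what lets an encoder extrapolate to 8192-token inputs. A minimal numpy sketch of that bias matrix (an illustration only, using the slope schedule from the original ALiBi paper for power-of-two head counts, not the model's actual implementation):

```python
import numpy as np

def alibi_slopes(n_heads: int) -> np.ndarray:
    # Geometric slope schedule from the ALiBi paper
    # (valid as written for power-of-two head counts).
    start = 2.0 ** (-8.0 / n_heads)
    return start ** np.arange(1, n_heads + 1)

def symmetric_alibi_bias(seq_len: int, n_heads: int) -> np.ndarray:
    # Symmetric (bidirectional) variant: the penalty depends only on
    # |i - j|, so each token can attend both left and right.
    pos = np.arange(seq_len)
    dist = np.abs(pos[:, None] - pos[None, :])        # (seq_len, seq_len)
    slopes = alibi_slopes(n_heads)                    # (n_heads,)
    return -slopes[:, None, None] * dist[None, :, :]  # (n_heads, L, L)

bias = symmetric_alibi_bias(seq_len=4, n_heads=2)
```

Each `bias[h]` would be added to head `h`'s pre-softmax attention scores; because the penalty is purely distance-based, the same formula extends to sequence lengths unseen during training.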