---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps queries to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search over queries.
## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

# Load the model (replace {MODEL_NAME} with the model ID)
model = SentenceTransformer('{MODEL_NAME}')

# Encode queries into 384-dimensional embeddings
queries = ["flight cost from nyc to la", "ticket prices from nyc to la"]
embeddings = model.encode(queries)
print(embeddings)
```
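Because the model produces normalized embeddings (see the `Normalize` module in the architecture below), queries can be compared with cosine similarity. The sketch below is illustrative only: it reuses the `{MODEL_NAME}` placeholder and adds a hypothetical unrelated query for contrast.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

# Queries from the example above, plus a hypothetical unrelated query
corpus_queries = [
    "flight cost from nyc to la",
    "ticket prices from nyc to la",
    "weather forecast for tomorrow",
]
corpus_embeddings = model.encode(corpus_queries, convert_to_tensor=True)

# Score a new query against the corpus; the embeddings are unit-normalized,
# so cosine similarity equals the dot product
query_embedding = model.encode("how much is a flight from nyc to la", convert_to_tensor=True)
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)  # higher scores for the two flight-price queries
```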
## Training

The model was trained for 1M steps with a batch size of 1024 at a learning rate of 2e-5, using a cosine learning rate scheduler with 10,000 warmup steps.
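The training loss, optimizer, and data are not described in this card. Purely as an illustration of the reported schedule (1M steps, learning rate 2e-5, cosine decay, 10,000 warmup steps), the sketch below wires those numbers into `transformers`' `get_cosine_schedule_with_warmup`; the AdamW optimizer is an assumption, not a documented choice.

```python
import torch
from sentence_transformers import SentenceTransformer
from transformers import get_cosine_schedule_with_warmup

model = SentenceTransformer('{MODEL_NAME}')

# Reported hyperparameters: 1M steps, batch size 1024, peak lr 2e-5,
# cosine decay with 10,000 warmup steps (optimizer choice is assumed)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=1_000_000,
)

# In each training step (loss and data pipeline are not part of this card):
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```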
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DataParallel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```
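For reference, the same pipeline (transformer encoder, mean pooling over tokens, L2 normalization) can usually be reproduced without sentence-transformers, assuming the checkpoint also loads with plain `transformers`. This is a sketch of that equivalence, not an officially documented usage path.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding tokens
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

queries = ["flight cost from nyc to la", "ticket prices from nyc to la"]
encoded = tokenizer(queries, padding=True, truncation=True, max_length=256, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded)

# Mean pooling followed by L2 normalization, mirroring modules (1) and (2) above
embeddings = F.normalize(mean_pooling(model_output, encoded['attention_mask']), p=2, dim=1)
print(embeddings.shape)  # expected: (2, 384)
```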