Half precision? #2, opened by csinva
Thanks for sharing this wonderful model! I was wondering -- is there any way to use this model in half-precision (float16)?
The standard model.half() does not seem to work.
Hi, thanks a lot for your interest in the INSTRUCTOR model!
You can use the INSTRUCTOR model to embed texts in half precision like this:
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-xl')  # load the model first
sentences_a = [['Represent the Science sentence: ', 'Parton energy loss in QCD matter'],
               ['Represent the Financial statement: ', 'The Federal Reserve on Wednesday raised its benchmark interest rate.']]
embeddings_a = model.encode(sentences_a, convert_to_tensor=True).half()
Casting the returned embeddings to float16 halves their storage cost, which should reduce the memory needed when working with the xl model.
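As a quick illustration of why the cast helps (plain PyTorch, independent of the INSTRUCTOR package), converting a tensor to float16 halves its per-element storage:

```python
import torch

# a stand-in for a batch of float32 embeddings
emb32 = torch.randn(2, 768)
emb16 = emb32.half()  # cast to float16

print(emb16.dtype)           # torch.float16
print(emb32.element_size())  # 4 bytes per element
print(emb16.element_size())  # 2 bytes per element
```

Note that this only shrinks the stored embeddings; the model's own weights stay in float32 during encoding.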
For more detailed information, you may refer to the following two issues:
https://github.com/UKPLab/sentence-transformers/issues/425
https://github.com/UKPLab/sentence-transformers/issues/822
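Those issues discuss running sentence-transformers models in reduced precision. As a hedged sketch of the general mechanism they point to (plain PyTorch autocast on a toy linear layer, not INSTRUCTOR's own API), eligible operations inside an autocast region run in a lower-precision dtype:

```python
import torch

layer = torch.nn.Linear(8, 8)  # toy stand-in for a transformer layer
x = torch.randn(1, 8)

# on CPU, autocast uses bfloat16; on CUDA you would pass dtype=torch.float16
with torch.autocast("cpu", dtype=torch.bfloat16):
    out = layer(x)

print(out.dtype)  # torch.bfloat16
```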
Hope this helps! Feel free to add any further questions or comments!