Hi, thanks for your work, I have a little question
#1 opened by notzero
I have a question: for the embeddings, why don't you use
```python
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
sentence_embeddings = model(**inputs)[0][:, 0]
```
(from https://huggingface.co/hooman650/bge-large-en-v1.5-onnx-o4)
Your example code results in an embedding with a different shape, [17, 1024], while the code on https://huggingface.co/hooman650/bge-large-en-v1.5-onnx-o4 gives the correct [1024].
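For context, here is a minimal sketch of where the two shapes come from, assuming the PyTorch checkpoint BAAI/bge-large-en-v1.5 (which this ONNX repo appears to be an export of): the model's raw output is one 1024-dim vector per token, and selecting the [CLS] token at index 0 collapses that into a single sentence embedding.

```python
# Sketch only; assumes the PyTorch checkpoint BAAI/bge-large-en-v1.5.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5')

pairs = ["what is a cat?"]
inputs = tokenizer(pairs, padding=True, truncation=True,
                   return_tensors='pt', max_length=512)

with torch.no_grad():
    # last_hidden_state: [batch, seq_len, 1024] -- one vector per token,
    # e.g. [17, 1024] for a single 17-token input if the batch axis is dropped.
    last_hidden_state = model(**inputs)[0]

# Take the [CLS] token (index 0) as the sentence embedding: [batch, 1024].
sentence_embeddings = last_hidden_state[:, 0]
# BGE models are meant to be compared with L2-normalized vectors.
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 1024])
```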
Thanks
Ok cool
notzero changed discussion status to closed