# alime-reranker-large-zh

The Alime reranker model: a Chinese cross-encoder that scores the relevance of query–passage pairs, intended for reranking retrieval results.
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Query–passage pairs to score.
pairs = [
    ["西湖在哪?", "西湖风景名胜区位于浙江省杭州市"],
    ["今天天气不错", "你吓死我了"],
]

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

tokenizer = AutoTokenizer.from_pretrained("Pristinenlp/alime-reranker-large-zh")
model = AutoModelForSequenceClassification.from_pretrained("Pristinenlp/alime-reranker-large-zh").to(device)

# Tokenize the pairs and compute one relevance score (logit) per pair.
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt", max_length=512).to(device)
with torch.no_grad():
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
print(scores.tolist())
```
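Each logit is a relevance score for its pair, so reranking amounts to pairing one query with every candidate passage and sorting by score. A minimal sketch of that sorting step, using hypothetical score values in place of actual model output:

```python
query = "西湖在哪?"
candidates = ["西湖风景名胜区位于浙江省杭州市", "你吓死我了"]
# Hypothetical scores standing in for model(**inputs).logits.view(-1).tolist();
# real values depend on the model, but higher means more relevant.
scores = [6.8, -3.1]

# Sort candidates by descending relevance score.
ranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
for text, score in ranked:
    print(f"{score:+.2f}  {text}")
```

Note that the scores are raw logits, not probabilities; apply `torch.sigmoid` if you need values in [0, 1].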
## Evaluation results

Self-reported MTEB reranking results:

| Dataset | map | mrr |
| --- | --- | --- |
| CMedQAv1 (test) | 82.322 | 84.914 |
| CMedQAv2 (test) | 84.086 | 86.901 |
| MMarcoReranking | 35.497 | 35.292 |
| T2Reranking | 68.258 | 78.642 |