---
license: apache-2.0
language:
- ar
pipeline_tag: text-classification
library_name: transformers
base_model:
- Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2
tags:
- reranking
- sentence-transformers
datasets:
- unicamp-dl/mmarco
---
# Namaa-Reranker-v1 🚀✨
**NAMAA-space** releases **Namaa-Reranker-v1**, a high-performance model fine-tuned on [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco) to elevate Arabic document retrieval and ranking to new heights! 📚🇸🇦
This model is designed to **improve search relevance** for **Arabic** documents by accurately ranking them according to their contextual fit for a given query.
## Key Features 🔑
- **Optimized for Arabic**: Built on the highly performant [Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2](https://huggingface.co/Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2) and trained exclusively on rich Arabic data.
- **Advanced Document Ranking**: Ranks results with precision, perfect for search engines, recommendation systems, and question-answering applications.
- **State-of-the-Art Performance**: Achieves excellent results compared to well-known rerankers (see [Evaluation](https://huggingface.co/NAMAA-Space/Rerankerv1#evaluation)), ensuring reliable relevance and precision.
## Example Use Cases 💼
- **Retrieval Augmented Generation**: Improve search result relevance for Arabic content.
- **Content Recommendation**: Deliver top-tier Arabic content suggestions.
- **Question Answering**: Boost answer retrieval quality in Arabic-focused systems.
## Usage
### Within sentence-transformers
Usage is easiest with [SentenceTransformers](https://www.sbert.net/) installed; you can then load the model as a cross-encoder:
```python
from sentence_transformers import CrossEncoder

# Load the reranker as a cross-encoder (inputs are truncated to 512 tokens)
model = CrossEncoder('NAMAA-Space/Namaa-Reranker-v1', max_length=512)

query = 'كيف يمكن استخدام التعلم العميق في معالجة الصور الطبية؟'  # "How can deep learning be used in medical image processing?"
paragraph1 = 'التعلم العميق يساعد في تحليل الصور الطبية وتشخيص الأمراض'  # relevant: "Deep learning helps analyze medical images and diagnose diseases"
paragraph2 = 'الذكاء الاصطناعي يستخدم في تحسين الإنتاجية في الصناعات'  # off-topic: "AI is used to improve productivity in industry"

# One relevance score per (query, paragraph) pair; higher means a better fit
scores = model.predict([(query, paragraph1), (query, paragraph2)])
print(scores)
```
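`model.predict` returns a relevance score per pair, not a sorted list. A minimal sketch of turning those scores into a ranking; the score values below are illustrative stand-ins, not real model output:

```python
# Illustrative scores standing in for model.predict output (hypothetical values)
paragraphs = [
    'التعلم العميق يساعد في تحليل الصور الطبية وتشخيص الأمراض',
    'الذكاء الاصطناعي يستخدم في تحسين الإنتاجية في الصناعات',
]
scores = [0.91, 0.12]

# Sort paragraphs by descending relevance score
ranked = sorted(zip(paragraphs, scores), key=lambda pair: pair[1], reverse=True)
for text, score in ranked:
    print(f'{score:.2f}  {text}')
```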
## Evaluation
We evaluate our model on two different datasets using the metrics **MAP**, **MRR**, and **NDCG@10**.
The purpose of this evaluation is to highlight the model's performance on both relevant/irrelevant labels and queries with one positive and multiple negative documents:
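For reference, a minimal sketch of two of the metrics above, computed over a single query's graded relevance list in ranked order (function names are my own, not part of any evaluation library):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: graded relevance, discounted by log2 of rank
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalise by the DCG of the ideal (descending-relevance) ordering
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def reciprocal_rank(relevances):
    # 1 / rank of the first relevant document (MRR averages this over queries)
    for i, rel in enumerate(relevances):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0
```

A perfectly ordered list scores NDCG@10 of 1.0; pushing relevant documents down the ranking lowers both metrics.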
Dataset 1: [NAMAA-Space/Ar-Reranking-Eval](https://huggingface.co/datasets/NAMAA-Space/Ar-Reranking-Eval)
![Plot](https://huggingface.co/NAMAA-Space/Namaa-Reranker-v1/resolve/main/Dataset1_Evaluation.jpg)
Dataset 2: [NAMAA-Space/Arabic-Reranking-Triplet-5-Eval](https://huggingface.co/datasets/NAMAA-Space/Arabic-Reranking-Triplet-5-Eval)
![Plot](https://huggingface.co/NAMAA-Space/Namaa-Reranker-v1/resolve/main/Dataset2_Evaluation.jpg)
As seen, the model performs extremely well in comparison to other well-known rerankers.
WIP: More comparisons and evaluations on Arabic datasets.