
SILMA STS Arabic Embedding Model 0.1

This is a sentence-transformers model finetuned from silma-ai/silma-embeddding-matryoshka-0.1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: aubmindlab/bert-base-arabertv02
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
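
As a quick sanity check of the output dimensionality and similarity function listed above, the following minimal sketch encodes two sentences and compares them (it assumes the Sentence Transformers library is installed, as described under Usage below; the example sentences are illustrative only):

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("silma-ai/silma-embeddding-sts-0.1")

# Encoding a list of sentences returns one 768-dimensional vector per sentence
embeddings = model.encode(["مرحبا بالعالم", "Hello world"])
print(embeddings.shape)  # (2, 768)

# Cosine similarity between the two vectors, as used throughout the samples below
print(cos_sim(embeddings[0], embeddings[1]))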

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then load the model:

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("silma-ai/silma-embeddding-sts-0.1")

Samples

[+] Short Sentence Similarity

Arabic

query = "الطقس اليوم مشمس"
sentence_1 = "الجو اليوم كان مشمسًا ورائعًا"
sentence_2 = "الطقس اليوم غائم"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.42602288722991943
# sentence_2_similarity: 0.10798501968383789
# =======

English

query = "The weather is sunny today"
sentence_1 = "The morning was bright and sunny"
sentence_2 = "it is too cloudy today"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.5796191692352295
# sentence_2_similarity: 0.21948376297950745
# =======

[+] Long Sentence Similarity

Arabic

query = "الكتاب يتحدث عن أهمية الذكاء الاصطناعي في تطوير المجتمعات الحديثة"
sentence_1 = "في هذا الكتاب، يناقش الكاتب كيف يمكن للتكنولوجيا أن تغير العالم"
sentence_2 = "الكاتب يتحدث عن أساليب الطبخ التقليدية في دول البحر الأبيض المتوسط"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.5725120306015015
# sentence_2_similarity: 0.22617210447788239
# =======

English

query = "China said on Saturday it would issue special bonds to help its sputtering economy, signalling a spending spree to bolster banks"
sentence_1 = "The Chinese government announced plans to release special bonds aimed at supporting its struggling economy and stabilizing the banking sector."
sentence_2 = "Several countries are preparing for a global technology summit to discuss advancements in bolster global banks."

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.6438770294189453
# sentence_2_similarity: 0.4720292389392853
# =======

[+] Question to Paragraph Matching

Arabic

query = "ما هي فوائد ممارسة الرياضة؟"
sentence_1 = "ممارسة الرياضة بشكل منتظم تساعد على تحسين الصحة العامة واللياقة البدنية"
sentence_2 = "تعليم الأطفال في سن مبكرة يساعدهم على تطوير المهارات العقلية بسرعة"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.6058318614959717
# sentence_2_similarity: 0.006831036880612373
# =======

English

query = "What are the benefits of exercising?"
sentence_1 = "Regular exercise helps improve overall health and physical fitness"
sentence_2 = "Teaching children at an early age helps them develop cognitive skills quickly"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.3593001365661621
# sentence_2_similarity: 0.06493218243122101
# =======

[+] Message to Intent-Name Mapping

Arabic

query = "أرغب في حجز تذكرة طيران من دبي الى القاهرة يوم الثلاثاء القادم"
sentence_1 = "حجز رحلة"
sentence_2 = "إلغاء حجز"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.4646468162536621
# sentence_2_similarity: 0.19563665986061096
# =======

English

query = "Please send an email to all of the managers"
sentence_1 = "send email"
sentence_2 = "read inbox emails"

query_embedding = model.encode(query)

print("sentence_1_similarity:", cos_sim(query_embedding, model.encode(sentence_1))[0][0].tolist())
print("sentence_2_similarity:", cos_sim(query_embedding, model.encode(sentence_2))[0][0].tolist())

# ======= Output
# sentence_1_similarity: 0.6485046744346619
# sentence_2_similarity: 0.43906497955322266
# =======
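
Beyond the pairwise similarity samples above, the model description also lists semantic search and paraphrase mining as use cases. Below is a minimal, hedged sketch using the generic utilities in sentence_transformers.util; the corpus and query are illustrative and not part of the card:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("silma-ai/silma-embeddding-sts-0.1")

corpus = [
    "ممارسة الرياضة بشكل منتظم تساعد على تحسين الصحة العامة",
    "The Chinese government announced plans to release special bonds",
    "حجز رحلة طيران إلى القاهرة",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Semantic search: rank corpus entries against a query
query_embedding = model.encode("كيف أحجز تذكرة طيران؟", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # [{"corpus_id": ..., "score": ...}, ...] for the single query

# Paraphrase mining: find the most similar sentence pairs within the corpus itself
pairs = util.paraphrase_mining(model, corpus, top_k=1)
print(pairs)    # [[score, idx_i, idx_j], ...]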

Evaluation

Metrics

Semantic Similarity

Metric              Value
pearson_cosine      0.8515
spearman_cosine     0.8559
pearson_manhattan   0.8220
spearman_manhattan  0.8397
pearson_euclidean   0.8231
spearman_euclidean  0.8444
pearson_dot         0.8515
spearman_dot        0.8557
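
For reference, semantic-similarity metrics like these can be computed with the library's EmbeddingSimilarityEvaluator. The sketch below is an assumption-laden outline, not the official evaluation script: the split name and the column names ("sentence1", "sentence2", "score") are guesses based on the Training Details section that follows.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("silma-ai/silma-embeddding-sts-0.1")

# Assumed eval split: first 100 rows of the phase-2 STS dataset (see Training Details)
eval_ds = load_dataset("silma-ai/silma-arabic-english-sts-dataset-v1.0", split="train").select(range(100))

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_ds["sentence1"],  # assumed column name
    sentences2=eval_ds["sentence2"],  # assumed column name
    scores=eval_ds["score"],          # assumed column name
    name="sts-eval",
)
results = evaluator(model)
print(results)  # pearson/spearman for cosine, manhattan, euclidean, and dot similarities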

Training Details

This model was fine-tuned in two phases:

Phase 1:

In phase 1, we curated the silma-ai/silma-arabic-triplets-dataset-v1.0 dataset, which contains more than 2.25M (anchor, positive, negative) Arabic/English records. The first 600 samples were held out as the evaluation set, while the rest were used for fine-tuning.

Phase 1 produced a fine-tuned Matryoshka model based on aubmindlab/bert-base-arabertv02, trained with the following hyperparameters:

  • per_device_train_batch_size: 250
  • per_device_eval_batch_size: 10
  • learning_rate: 1e-05
  • num_train_epochs: 3
  • bf16: True
  • dataloader_drop_last: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

training script
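
For orientation only, here is a minimal sketch of how a phase-1 run with these hyperparameters could be wired up using the Sentence Transformers v3 trainer. The loss choice (MatryoshkaLoss wrapping MultipleNegativesRankingLoss), the Matryoshka dimensions, and the output directory are assumptions; refer to the training script above for the actual setup.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("aubmindlab/bert-base-arabertv02")

# (anchor, positive, negative) triplets; first 600 rows held out for evaluation
dataset = load_dataset("silma-ai/silma-arabic-triplets-dataset-v1.0", split="train")
eval_dataset = dataset.select(range(600))
train_dataset = dataset.select(range(600, len(dataset)))

# Assumed loss: Matryoshka training wrapped around a contrastive triplet loss
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])  # dims assumed

args = SentenceTransformerTrainingArguments(
    output_dir="matryoshka-phase1",  # illustrative path
    per_device_train_batch_size=250,
    per_device_eval_batch_size=10,
    learning_rate=1e-5,
    num_train_epochs=3,
    bf16=True,
    dataloader_drop_last=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()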

Phase 2:

In phase 2, we curated the silma-ai/silma-arabic-english-sts-dataset-v1.0 dataset, which contains more than 30k (sentence1, sentence2, similarity-score) Arabic/English records. The first 100 samples were held out as the evaluation set, while the rest were used for fine-tuning.

Phase 2 produced a fine-tuned STS model based on the phase-1 model, trained with the following hyperparameters:

  • eval_strategy: steps
  • per_device_train_batch_size: 250
  • per_device_eval_batch_size: 10
  • learning_rate: 1e-06
  • num_train_epochs: 10
  • bf16: True
  • dataloader_drop_last: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

training script
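
Similarly hedged, a phase-2 run could continue from the phase-1 model with a regression-style loss over (sentence1, sentence2, score) pairs. CosineSimilarityLoss and the column layout are assumptions; refer to the training script above for the actual setup.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("silma-ai/silma-embeddding-matryoshka-0.1")

# Two sentence columns plus a float similarity score; first 100 rows held out for evaluation
dataset = load_dataset("silma-ai/silma-arabic-english-sts-dataset-v1.0", split="train")
eval_dataset = dataset.select(range(100))
train_dataset = dataset.select(range(100, len(dataset)))

loss = CosineSimilarityLoss(model)  # assumed loss for (pair, score) supervision

args = SentenceTransformerTrainingArguments(
    output_dir="sts-phase2",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=250,
    per_device_eval_batch_size=10,
    learning_rate=1e-6,
    num_train_epochs=10,
    bf16=True,
    dataloader_drop_last=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()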

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.2.0
  • Transformers: 4.45.2
  • PyTorch: 2.3.1
  • Accelerate: 1.0.1
  • Datasets: 3.0.1
  • Tokenizers: 0.20.1

Citation:

BibTeX:

@misc{silma2024embedding,
  author = {Abu Bakr Soliman and Karim Ouda and {SILMA AI}},
  title = {SILMA Embedding STS 0.1},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/silma-ai/silma-embeddding-sts-0.1}},
}

APA:

Abu Bakr Soliman, Karim Ouda, SILMA AI. (2024). SILMA Embedding STS 0.1 [Model]. Hugging Face. https://huggingface.co/silma-ai/silma-embeddding-sts-0.1