
SentenceTransformer based on sentence-transformers/LaBSE

This is a sentence-transformers model fine-tuned from sentence-transformers/LaBSE on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/LaBSE
  • Output Dimensionality: 768 tokens
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • Omartificial-Intelligence-Space/arabic-n_li-triplet

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
  (3): Normalize()
)
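
The pipeline above ends in a Normalize() module, so every full-size embedding is unit-length and dot-product similarity coincides with cosine similarity (which is why the pearson_dot rows equal the pearson_cosine rows in the 768-dimension evaluation tables below). A minimal sketch to verify this, using two arbitrary Arabic sentences:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-labse-Matryoshka")

# "The sky is blue." / "The sea is wide."
embeddings = model.encode(["السماء زرقاء", "البحر واسع"])

# The trailing Normalize() module makes each vector unit-length ...
print(np.linalg.norm(embeddings, axis=1))  # ~[1. 1.]

# ... so the dot product already equals the cosine similarity.
dot = embeddings @ embeddings.T
norms = np.linalg.norm(embeddings, axis=1)
cosine = dot / np.outer(norms, norms)
print(np.allclose(dot, cosine, atol=1e-5))  # True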

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-labse-Matryoshka")
# Run inference
sentences = [
    # "A young man with blond hair sits on the wall reading a newspaper while a woman and a young girl pass by."
    'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
    # "A young male looks at a newspaper while two women pass beside him."
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
    # "The young man is asleep while the mother leads her daughter to the park."
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
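
Because the model was trained with MatryoshkaLoss (see Training Details below), its embeddings can be truncated to 512, 256, 128, or 64 dimensions at a modest cost in quality. A sketch using the truncate_dim argument (available in recent sentence-transformers releases); note that truncated vectors are no longer unit-length, so compare them with cosine similarity (the model's default) rather than raw dot products:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions at encode() time
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/Arabic-labse-Matryoshka",
    truncate_dim=256,
)

embeddings = model.encode([
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
])
print(embeddings.shape)  # (2, 256)

# Cosine similarity re-normalizes the truncated vectors
print(model.similarity(embeddings, embeddings))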

Evaluation

Metrics

Each "Semantic Similarity" block below reports results at one of the five Matryoshka dimensions, ordered 768, 512, 256, 128, 64. The first five blocks correspond to the step-0 evaluation in the Training Logs (the base model before fine-tuning); the last five correspond to the final evaluation at epoch 1.0.

Semantic Similarity (768 dimensions, step 0)

Metric               Value
pearson_cosine       0.7269
spearman_cosine      0.7225
pearson_manhattan    0.7259
spearman_manhattan   0.7210
pearson_euclidean    0.7260
spearman_euclidean   0.7225
pearson_dot          0.7269
spearman_dot         0.7225
pearson_max          0.7269
spearman_max         0.7225

Semantic Similarity (512 dimensions, step 0)

Metric               Value
pearson_cosine       0.7268
spearman_cosine      0.7224
pearson_manhattan    0.7241
spearman_manhattan   0.7195
pearson_euclidean    0.7248
spearman_euclidean   0.7213
pearson_dot          0.7253
spearman_dot         0.7205
pearson_max          0.7268
spearman_max         0.7224

Semantic Similarity (256 dimensions, step 0)

Metric               Value
pearson_cosine       0.7283
spearman_cosine      0.7264
pearson_manhattan    0.7228
spearman_manhattan   0.7181
pearson_euclidean    0.7251
spearman_euclidean   0.7215
pearson_dot          0.7243
spearman_dot         0.7221
pearson_max          0.7283
spearman_max         0.7264

Semantic Similarity (128 dimensions, step 0)

Metric               Value
pearson_cosine       0.7102
spearman_cosine      0.7104
pearson_manhattan    0.7135
spearman_manhattan   0.7089
pearson_euclidean    0.7172
spearman_euclidean   0.7130
pearson_dot          0.6778
spearman_dot         0.6746
pearson_max          0.7172
spearman_max         0.7130

Semantic Similarity (64 dimensions, step 0)

Metric               Value
pearson_cosine       0.6931
spearman_cosine      0.6982
pearson_manhattan    0.6971
spearman_manhattan   0.6942
pearson_euclidean    0.7013
spearman_euclidean   0.6987
pearson_dot          0.6377
spearman_dot         0.6345
pearson_max          0.7013
spearman_max         0.6987

Semantic Similarity (768 dimensions, epoch 1.0)

Metric               Value
pearson_cosine       0.8144
spearman_cosine      0.8205
pearson_manhattan    0.8203
spearman_manhattan   0.8204
pearson_euclidean    0.8202
spearman_euclidean   0.8205
pearson_dot          0.8144
spearman_dot         0.8205
pearson_max          0.8203
spearman_max         0.8205

Semantic Similarity (512 dimensions, epoch 1.0)

Metric               Value
pearson_cosine       0.8143
spearman_cosine      0.8212
pearson_manhattan    0.8217
spearman_manhattan   0.8216
pearson_euclidean    0.8216
spearman_euclidean   0.8219
pearson_dot          0.8097
spearman_dot         0.8147
pearson_max          0.8217
spearman_max         0.8219

Semantic Similarity (256 dimensions, epoch 1.0)

Metric               Value
pearson_cosine       0.8076
spearman_cosine      0.8159
pearson_manhattan    0.8209
spearman_manhattan   0.8197
pearson_euclidean    0.8210
spearman_euclidean   0.8203
pearson_dot          0.7871
spearman_dot         0.7875
pearson_max          0.8210
spearman_max         0.8203

Semantic Similarity (128 dimensions, epoch 1.0)

Metric               Value
pearson_cosine       0.8024
spearman_cosine      0.8118
pearson_manhattan    0.8189
spearman_manhattan   0.8181
pearson_euclidean    0.8198
spearman_euclidean   0.8185
pearson_dot          0.7513
spearman_dot         0.7428
pearson_max          0.8198
spearman_max         0.8185

Semantic Similarity (64 dimensions, epoch 1.0)

Metric               Value
pearson_cosine       0.7855
spearman_cosine      0.7949
pearson_manhattan    0.8060
spearman_manhattan   0.8041
pearson_euclidean    0.8088
spearman_euclidean   0.8060
pearson_dot          0.6778
spearman_dot         0.6616
pearson_max          0.8088
spearman_max         0.8060
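
For reference, metrics of this family correlate the model's similarity scores with human-annotated gold scores over sentence pairs; sentence-transformers ships an EmbeddingSimilarityEvaluator that reports exactly these pearson_*/spearman_* values. A hand-rolled sketch with hypothetical STS-style pairs (the actual benchmark pairs behind the tables above are not reproduced here):

import numpy as np
from scipy.stats import pearsonr, spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-labse-Matryoshka")

# Hypothetical STS-style data: (sentence1, sentence2, human score in [0, 5])
pairs = [
    ("القط يجلس على السجادة", "قط يستلقي على البساط", 4.2),   # near-paraphrases
    ("الرجل يقود سيارة", "رجل يقود شاحنة صغيرة", 3.0),        # related
    ("الطفل يلعب بالكرة", "امرأة تطبخ العشاء", 0.2),          # unrelated
]
s1, s2, gold = zip(*pairs)

e1, e2 = model.encode(list(s1)), model.encode(list(s2))
cos = np.sum(e1 * e2, axis=1)  # embeddings are unit-length, so the dot product is the cosine

print("pearson_cosine:", pearsonr(cos, gold)[0])
print("spearman_cosine:", spearmanr(cos, gold)[0])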

Training Details

Training Dataset

Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 557,850 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

                 anchor        positive      negative
    type         string        string        string
    min tokens   4             4             5
    mean tokens  9.99          12.44         13.82
    max tokens   51            49            49
  • Samples (English glosses in parentheses):
    anchor:   شخص على حصان يقفز فوق طائرة معطلة (A person on a horse jumps over a broken-down airplane.)
    positive: شخص في الهواء الطلق، على حصان. (A person is outdoors, on a horse.)
    negative: شخص في مطعم، يطلب عجة. (A person is in a restaurant, ordering an omelette.)

    anchor:   أطفال يبتسمون و يلوحون للكاميرا (Children smiling and waving at the camera.)
    positive: هناك أطفال حاضرون (There are children present.)
    negative: الاطفال يتجهمون (The children are frowning.)

    anchor:   صبي يقفز على لوح التزلج في منتصف الجسر الأحمر. (A boy jumps on a skateboard in the middle of the red bridge.)
    positive: الفتى يقوم بخدعة التزلج (The boy does a skateboard trick.)
    negative: الصبي يتزلج على الرصيف (The boy skates on the sidewalk.)
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows this configuration):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
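
The configuration above wraps an in-batch-negatives ranking objective so that it is applied at every nested embedding size with equal weight. A minimal construction sketch with the sentence-transformers losses API:

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/LaBSE")

# Ranking loss over (anchor, positive, negative) triplets with in-batch negatives
base_loss = losses.MultipleNegativesRankingLoss(model)

# Apply the same loss at each nested embedding prefix, weighting all dims equally
loss = losses.MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # -1 = use every dimension at every training step
)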
    

Evaluation Dataset

Omartificial-Intelligence-Space/arabic-n_li-triplet

  • Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
  • Size: 6,584 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

                 anchor        positive      negative
    type         string        string        string
    min tokens   4             4             4
    mean tokens  19.71         9.37          10.49
    max tokens   100           38            34
  • Samples (English glosses in parentheses):
    anchor:   امرأتان يتعانقان بينما يحملان حزمة (Two women embrace while carrying a package.)
    positive: إمرأتان يحملان حزمة (Two women are carrying a package.)
    negative: الرجال يتشاجرون خارج مطعم (The men are fighting outside a restaurant.)

    anchor:   طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة. (Two young children in blue shirts, one wearing the number 9 and the other the number 2, stand on wooden steps in the bathroom washing their hands in the sink.)
    positive: طفلين يرتديان قميصاً مرقماً يغسلون أيديهم (Two children in numbered shirts wash their hands.)
    negative: طفلين يرتديان سترة يذهبان إلى المدرسة (Two children in jackets go to school.)

    anchor:   رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس (A man sells donuts to a customer during a world fair held in the city of Angeles.)
    positive: رجل يبيع الدونات لعميل (A man sells donuts to a customer.)
    negative: امرأة تشرب قهوتها في مقهى صغير (A woman drinks her coffee in a small café.)
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
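
A sketch of how these non-default values map onto the sentence-transformers v3 training API (the dataset split name and output path are assumptions; the loss is the MatryoshkaLoss constructed in the Training Dataset section):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/LaBSE")
train_dataset = load_dataset("Omartificial-Intelligence-Space/arabic-n_li-triplet", split="train")

loss = losses.MatryoshkaLoss(
    model,
    losses.MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="arabic-labse-matryoshka",  # hypothetical output path
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()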

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss sts-test-128_spearman_cosine sts-test-256_spearman_cosine sts-test-512_spearman_cosine sts-test-64_spearman_cosine sts-test-768_spearman_cosine
None 0 - 0.7104 0.7264 0.7224 0.6982 0.7225
0.0229 200 13.1738 - - - - -
0.0459 400 8.8127 - - - - -
0.0688 600 8.0984 - - - - -
0.0918 800 7.2984 - - - - -
0.1147 1000 7.5749 - - - - -
0.1377 1200 7.1292 - - - - -
0.1606 1400 6.6146 - - - - -
0.1835 1600 6.6523 - - - - -
0.2065 1800 6.1095 - - - - -
0.2294 2000 6.0841 - - - - -
0.2524 2200 6.3024 - - - - -
0.2753 2400 6.1941 - - - - -
0.2983 2600 6.1686 - - - - -
0.3212 2800 5.8317 - - - - -
0.3442 3000 6.0597 - - - - -
0.3671 3200 5.7832 - - - - -
0.3900 3400 5.7088 - - - - -
0.4130 3600 5.6988 - - - - -
0.4359 3800 5.5268 - - - - -
0.4589 4000 5.5543 - - - - -
0.4818 4200 5.3152 - - - - -
0.5048 4400 5.2894 - - - - -
0.5277 4600 5.1805 - - - - -
0.5506 4800 5.4559 - - - - -
0.5736 5000 5.3836 - - - - -
0.5965 5200 5.2626 - - - - -
0.6195 5400 5.2511 - - - - -
0.6424 5600 5.3308 - - - - -
0.6654 5800 5.2264 - - - - -
0.6883 6000 5.2881 - - - - -
0.7113 6200 5.1349 - - - - -
0.7342 6400 5.0872 - - - - -
0.7571 6600 4.5515 - - - - -
0.7801 6800 3.4312 - - - - -
0.8030 7000 3.1008 - - - - -
0.8260 7200 2.9582 - - - - -
0.8489 7400 2.8153 - - - - -
0.8719 7600 2.7214 - - - - -
0.8948 7800 2.5392 - - - - -
0.9177 8000 2.584 - - - - -
0.9407 8200 2.5384 - - - - -
0.9636 8400 2.4937 - - - - -
0.9866 8600 2.4155 - - - - -
1.0 8717 - 0.8118 0.8159 0.8212 0.7949 0.8205

Framework Versions

  • Python: 3.9.18
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.0
  • PyTorch: 2.2.2+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1
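
To approximate this environment (a sketch; the PyTorch index URL assumes a CUDA 12.1 build):

pip install sentence-transformers==3.0.1 transformers==4.40.0 accelerate==0.26.1 datasets==2.19.0 tokenizers==0.19.1
pip install torch==2.2.2 --index-url https://download.pytorch.org/whl/cu121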

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Acknowledgments

The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.

Citing this Model

If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:

@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning}, 
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139}, 
}