
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
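
The stack can be confirmed after loading the model. The snippet below is a small sketch (it reuses the repository id from the usage example further down) that prints the modules together with the embedding dimension and maximum sequence length.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-290824")

# Prints the Transformer -> Pooling (CLS token) -> Normalize stack shown above
print(model)

print(model.get_sentence_embedding_dimension())  # 1024
print(model.max_seq_length)                      # 8192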

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-290824")
# Run inference
sentences = [
    'Les entitats inscrites en el Registre resten obligades a comunicar a l’Ajuntament qualsevol modificació en les seves dades registrals, podent sol·licitar la seva cancel·lació o comunicant la seva dissolució.',
    "Quin és el procediment per cancel·lar la inscripció d'una entitat al Registre municipal d'entitats?",
    'Quin és el paper de les entitats de protecció dels animals en la gestió de les colònies urbanes felines?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
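
The same embeddings support semantic search: encode a query and a set of passages, then rank the passages by cosine similarity. The sketch below reuses the sentences from the example above, with the question acting as the query.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-290824")

# Query and candidate passages taken from the example above
query = "Quin és el procediment per cancel·lar la inscripció d'una entitat al Registre municipal d'entitats?"
passages = [
    'Les entitats inscrites en el Registre resten obligades a comunicar a l’Ajuntament qualsevol modificació en les seves dades registrals, podent sol·licitar la seva cancel·lació o comunicant la seva dissolució.',
    'Quin és el paper de les entitats de protecció dels animals en la gestió de les colònies urbanes felines?',
]

query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)

# Embeddings are normalized, so cosine similarity acts as a relevance score per passage
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # shape [1, 2]; the registry passage should score highest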

Evaluation

Metrics

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.0862
cosine_accuracy@3 0.2155
cosine_accuracy@5 0.3276
cosine_accuracy@10 0.5108
cosine_precision@1 0.0862
cosine_precision@3 0.0718
cosine_precision@5 0.0655
cosine_precision@10 0.0511
cosine_recall@1 0.0862
cosine_recall@3 0.2155
cosine_recall@5 0.3276
cosine_recall@10 0.5108
cosine_ndcg@10 0.264
cosine_mrr@10 0.1897
cosine_map@100 0.2151

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.0841
cosine_accuracy@3 0.2091
cosine_accuracy@5 0.319
cosine_accuracy@10 0.5
cosine_precision@1 0.0841
cosine_precision@3 0.0697
cosine_precision@5 0.0638
cosine_precision@10 0.05
cosine_recall@1 0.0841
cosine_recall@3 0.2091
cosine_recall@5 0.319
cosine_recall@10 0.5
cosine_ndcg@10 0.2595
cosine_mrr@10 0.1867
cosine_map@100 0.2132

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.0862
cosine_accuracy@3 0.2112
cosine_accuracy@5 0.3211
cosine_accuracy@10 0.5129
cosine_precision@1 0.0862
cosine_precision@3 0.0704
cosine_precision@5 0.0642
cosine_precision@10 0.0513
cosine_recall@1 0.0862
cosine_recall@3 0.2112
cosine_recall@5 0.3211
cosine_recall@10 0.5129
cosine_ndcg@10 0.2647
cosine_mrr@10 0.1899
cosine_map@100 0.2155

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.0819
cosine_accuracy@3 0.2047
cosine_accuracy@5 0.306
cosine_accuracy@10 0.5043
cosine_precision@1 0.0819
cosine_precision@3 0.0682
cosine_precision@5 0.0612
cosine_precision@10 0.0504
cosine_recall@1 0.0819
cosine_recall@3 0.2047
cosine_recall@5 0.306
cosine_recall@10 0.5043
cosine_ndcg@10 0.2555
cosine_mrr@10 0.1808
cosine_map@100 0.2066

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.0841
cosine_accuracy@3 0.2004
cosine_accuracy@5 0.3147
cosine_accuracy@10 0.4914
cosine_precision@1 0.0841
cosine_precision@3 0.0668
cosine_precision@5 0.0629
cosine_precision@10 0.0491
cosine_recall@1 0.0841
cosine_recall@3 0.2004
cosine_recall@5 0.3147
cosine_recall@10 0.4914
cosine_ndcg@10 0.2517
cosine_mrr@10 0.1795
cosine_map@100 0.2058

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.0797
cosine_accuracy@3 0.2026
cosine_accuracy@5 0.3017
cosine_accuracy@10 0.4957
cosine_precision@1 0.0797
cosine_precision@3 0.0675
cosine_precision@5 0.0603
cosine_precision@10 0.0496
cosine_recall@1 0.0797
cosine_recall@3 0.2026
cosine_recall@5 0.3017
cosine_recall@10 0.4957
cosine_ndcg@10 0.2527
cosine_mrr@10 0.1796
cosine_map@100 0.2058
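
The six tables report the same retrieval evaluation at the Matryoshka dimensions used during training (1024, 768, 512, 256, 128, and 64, in that order). To trade a small drop in quality for smaller vectors, the model can be loaded with a truncated output dimension; a minimal sketch, assuming the truncate_dim argument available in recent Sentence Transformers releases:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 Matryoshka dimensions of each embedding
model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-290824", truncate_dim=256)

embeddings = model.encode([
    "Quin és el procediment per cancel·lar la inscripció d'una entitat al Registre municipal d'entitats?",
])
print(embeddings.shape)  # (1, 256)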

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,173 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 8 tokens, mean: 48.75 tokens, max: 125 tokens
    • anchor: string; min: 10 tokens, mean: 21.07 tokens, max: 47 tokens
  • Samples:
    • positive: Els ajuts per a la realització d'activitats en el lleure esportiu estan destinats a les entitats sense ànim de lucre que desenvolupen activitats esportives i de lleure.
      anchor: Quins són els sectors que es beneficien dels ajuts?
    • positive: En el certificat s'indiquen les dades de planejament vigent, classificació del sòl, qualificació urbanística, condicions de l’edificació i usos admesos referides a una finca o solar concreta.
      anchor: Quin és el contingut de les condicions de l'edificació en el certificat d'aprofitament urbanístic?
    • positive: Aportació de documentació. Ajuts per compensar la disminució d'ingressos de les empreses o establiments del sector de l'hosteleria i restauració afectats per les mesures adoptades per la situació de crisis provocada pel SARS-CoV2
      anchor: Quin és el paper dels ajuts en la situació de crisis?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
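
For reference, a loss with these parameters would typically be built by wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. The snippet below is a sketch of that construction and is not taken from the original training script.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")

# In-batch negatives loss, applied at every Matryoshka dimension with equal weight
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # evaluate all dimensions at every step
)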
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.6130 10 3.0594 - - - - - -
0.9808 16 - 0.2047 0.1922 0.2020 0.2016 0.1774 0.2115
1.2261 20 1.525 - - - - - -
1.8391 30 0.7434 - - - - - -
1.9617 32 - 0.2186 0.2003 0.2102 0.2092 0.1870 0.2101
2.4521 40 0.4451 - - - - - -
2.9425 48 - 0.2083 0.2054 0.2091 0.2118 0.2009 0.2140
3.0651 50 0.2518 - - - - - -
3.6782 60 0.1801 - - - - - -
3.9847 65 - 0.2135 0.2071 0.2037 0.2115 0.2030 0.2191
4.2912 70 0.1483 - - - - - -
4.9042 80 0.0893 - - - - - -
4.9655 81 - 0.2066 0.2053 0.2057 0.2137 0.1982 0.2176
5.5172 90 0.0748 - - - - - -
5.9464 97 - 0.2171 0.2113 0.2086 0.2178 0.2120 0.2193
6.1303 100 0.064 - - - - - -
6.7433 110 0.0458 - - - - - -
6.9885 114 - 0.2294 0.2132 0.2151 0.2227 0.2054 0.2138
7.3563 120 0.0436 - - - - - -
7.9693 130 0.0241 0.2133 0.2083 0.2096 0.2138 0.2080 0.2124
8.5824 140 0.021 - - - - - -
8.9502 146 - 0.216 0.2074 0.2081 0.2162 0.2094 0.2177
9.1954 150 0.0237 - - - - - -
9.8084 160 0.0145 0.2151 0.2058 0.2066 0.2155 0.2058 0.2132
  • The saved checkpoint corresponds to the final logged evaluation (epoch 9.8084, step 160), whose map@100 values match the evaluation metrics reported above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.34.0.dev0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}