
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
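
Because pooling takes the CLS token and the final Normalize() module L2-normalizes every embedding, cosine similarity between outputs reduces to a plain dot product. A minimal sketch verifying this (reusing the model id from the usage example below):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-300824")
emb = model.encode(["una frase d'exemple", "una altra frase"])

# Every row is unit-length, so the dot product equals the cosine similarity
print(np.linalg.norm(emb, axis=1))  # ~[1.0, 1.0]
print(emb[0] @ emb[1])              # identical to the cosine similarity score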

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-300824")
# Run inference
sentences = [
    'Per valorar l’interès de la proposta es tindrà en compte: Tipus d’activitat Antecedents Dates de celebració Accions de promoció dutes a terme des de l’organització Nivell de molèstia previst i interferència en la vida quotidiana.',
    "Quin és el paper de les accions de promoció en les subvencions per a projectes i activitats de l'àmbit turístic?",
    "Quin és el benefici de la realització d'exposicions al Centre Cultural Miramar?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
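
Since the training pairs are question/passage, a common pattern is retrieval: encode a query and a set of candidate passages separately, then rank the candidates by similarity. A minimal sketch reusing the model and sentences from above (the query string is only illustrative):

# Encode a query and rank the candidate passages by cosine similarity
query_embedding = model.encode(["Quin és el termini per a presentar una sol·licitud de subvenció?"])
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(sentences[best], scores[0][best].item())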

Evaluation

Metrics

The six tables below report the same information-retrieval metrics, one table per Matryoshka embedding dimension (1024, 768, 512, 256, 128 and 64).

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.0591
cosine_accuracy@3 0.1276
cosine_accuracy@5 0.1735
cosine_accuracy@10 0.2861
cosine_precision@1 0.0591
cosine_precision@3 0.0425
cosine_precision@5 0.0347
cosine_precision@10 0.0286
cosine_recall@1 0.0591
cosine_recall@3 0.1276
cosine_recall@5 0.1735
cosine_recall@10 0.2861
cosine_ndcg@10 0.1537
cosine_mrr@10 0.1139
cosine_map@100 0.1398

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.0591
cosine_accuracy@3 0.1257
cosine_accuracy@5 0.1801
cosine_accuracy@10 0.2946
cosine_precision@1 0.0591
cosine_precision@3 0.0419
cosine_precision@5 0.036
cosine_precision@10 0.0295
cosine_recall@1 0.0591
cosine_recall@3 0.1257
cosine_recall@5 0.1801
cosine_recall@10 0.2946
cosine_ndcg@10 0.1564
cosine_mrr@10 0.1149
cosine_map@100 0.1405

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.0591
cosine_accuracy@3 0.1257
cosine_accuracy@5 0.1707
cosine_accuracy@10 0.2983
cosine_precision@1 0.0591
cosine_precision@3 0.0419
cosine_precision@5 0.0341
cosine_precision@10 0.0298
cosine_recall@1 0.0591
cosine_recall@3 0.1257
cosine_recall@5 0.1707
cosine_recall@10 0.2983
cosine_ndcg@10 0.1571
cosine_mrr@10 0.115
cosine_map@100 0.1397

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.0516
cosine_accuracy@3 0.121
cosine_accuracy@5 0.1679
cosine_accuracy@10 0.2889
cosine_precision@1 0.0516
cosine_precision@3 0.0403
cosine_precision@5 0.0336
cosine_precision@10 0.0289
cosine_recall@1 0.0516
cosine_recall@3 0.121
cosine_recall@5 0.1679
cosine_recall@10 0.2889
cosine_ndcg@10 0.1498
cosine_mrr@10 0.1082
cosine_map@100 0.1338

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.0516
cosine_accuracy@3 0.1173
cosine_accuracy@5 0.1717
cosine_accuracy@10 0.2889
cosine_precision@1 0.0516
cosine_precision@3 0.0391
cosine_precision@5 0.0343
cosine_precision@10 0.0289
cosine_recall@1 0.0516
cosine_recall@3 0.1173
cosine_recall@5 0.1717
cosine_recall@10 0.2889
cosine_ndcg@10 0.1488
cosine_mrr@10 0.1069
cosine_map@100 0.1328

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.0507
cosine_accuracy@3 0.1126
cosine_accuracy@5 0.1642
cosine_accuracy@10 0.2824
cosine_precision@1 0.0507
cosine_precision@3 0.0375
cosine_precision@5 0.0328
cosine_precision@10 0.0282
cosine_recall@1 0.0507
cosine_recall@3 0.1126
cosine_recall@5 0.1642
cosine_recall@10 0.2824
cosine_ndcg@10 0.1449
cosine_mrr@10 0.104
cosine_map@100 0.1306
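
These are the metrics reported by sentence-transformers' InformationRetrievalEvaluator. A minimal sketch of how such an evaluation can be run (the queries, corpus and relevant_docs dicts here are hypothetical toy placeholders, not the actual evaluation data):

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy placeholder data: id -> text, and query id -> set of relevant corpus ids
queries = {"q1": "Quin és el termini per a presentar una sol·licitud de subvenció?"}
corpus = {"d1": "Les entitats han de presentar la sol·licitud dins del termini establert."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_1024")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100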

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,593 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 5 tokens, mean: 49.28 tokens, max: 178 tokens
    • anchor: string; min: 10 tokens, mean: 21.16 tokens, max: 41 tokens
  • Samples:
    • positive: Mitjançant aquest tràmit la persona interessada posa en coneixement de l'Ajuntament l’inici o modificació substancial d’una activitat econòmica, i hi adjunta el certificat tècnic acreditatiu del compliment dels requisits necessaris que estableix la normativa vigent per a l‘exercici de l’activitat.
      anchor: Quin és el resultat esperat després de presentar el certificat tècnic en el tràmit de comunicació d'inici d'activitat?
    • positive: L'Ajuntament de Sitges ofereix a aquelles famílies que acompleixin els requisits establerts, ajuts per al pagament de la quota del servei i de la quota del menjador dels infants matriculats a les Llars d'Infants Municipals (0-3 anys).
      anchor: Quins són els requisits per a beneficiar-se dels ajuts de l'Ajuntament de Sitges?
    • positive: Les entitats o associacions culturals han de presentar la sol·licitud de subvenció dins del termini establert per l'Ajuntament de Sitges.
      anchor: Quin és el termini per a presentar una sol·licitud de subvenció per a un projecte cultural?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
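
Because MatryoshkaLoss trains the leading sub-dimensions (1024 down to 64) to be useful on their own, embeddings can be truncated to any of the sizes above at a modest quality cost (compare the Evaluation tables). A minimal sketch using the truncate_dim argument of SentenceTransformer (available since sentence-transformers 2.7):

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model_256 = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-300824", truncate_dim=256)
emb = model_256.encode(["una frase d'exemple"])
print(emb.shape)  # (1, 256)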
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
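
A minimal sketch of how a run with the non-default values above is typically wired up with the sentence-transformers v3 trainer (model, train_dataset, eval_dataset and loss are placeholders; loss would be the MatryoshkaLoss configured above, the output path is hypothetical, and save_strategy="epoch" is assumed so that load_best_model_at_end can match the per-epoch evaluation):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",               # hypothetical path
    eval_strategy="epoch",
    save_strategy="epoch",              # assumed; required by load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,     # effective train batch size: 16 * 16 = 256
    learning_rate=2e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model, args=args,
    train_dataset=train_dataset, eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()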

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.2667 10 3.5318 - - - - - -
0.5333 20 2.3744 - - - - - -
0.8 30 1.6587 - - - - - -
0.9867 37 - 0.1350 0.1317 0.1349 0.1341 0.1207 0.1322
1.0667 40 1.1513 - - - - - -
1.3333 50 1.0055 - - - - - -
1.6 60 0.7369 - - - - - -
1.8667 70 0.4855 - - - - - -
2.0 75 - 0.1366 0.1370 0.1376 0.1345 0.1290 0.1355
2.1333 80 0.4362 - - - - - -
2.4 90 0.3943 - - - - - -
2.6667 100 0.3495 - - - - - -
2.9333 110 0.2138 - - - - - -
2.9867 112 - 0.1364 0.1342 0.1374 0.1361 0.1256 0.1367
3.2 120 0.2176 - - - - - -
3.4667 130 0.2513 - - - - - -
3.7333 140 0.2163 - - - - - -
4.0 150 0.15 0.1401 0.1308 0.1332 0.1396 0.1279 0.1396
4.2667 160 0.1613 - - - - - -
4.5333 170 0.1955 - - - - - -
4.8 180 0.1514 - - - - - -
4.9333* 185 - 0.1398 0.1328 0.1338 0.1397 0.1306 0.1405
  • The row marked with an asterisk denotes the saved checkpoint; its values match the Evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.34.0.dev0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
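
To reproduce this environment, pinning the versions above is a reasonable starting point (a sketch; Accelerate was a dev build here, so the nearest release is substituted):

pip install sentence-transformers==3.0.1 transformers==4.44.2 torch==2.4.0 datasets==2.21.0 tokenizers==0.19.1 accelerate==0.34.0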

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}