
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 568M parameters (F32)
  • Training Dataset:
    • json

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
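
For reference, this architecture amounts to CLS-token pooling over an XLM-RoBERTa encoder followed by L2 normalization. The sketch below reproduces the three modules with the plain transformers API; it is only an illustration of what the modules do, and the SentenceTransformer loader shown under Usage remains the recommended path:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("adriansanz/ST-tramits-SQV-007-5ep")
encoder = AutoModel.from_pretrained("adriansanz/ST-tramits-SQV-007-5ep")

batch = tokenizer(
    ["Quan es pot adquirir de nou el dret funerari?"],
    padding=True, truncation=True, max_length=8192, return_tensors="pt",
)
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

cls = hidden[:, 0]                     # pooling_mode_cls_token: first (<s>) token
embeddings = F.normalize(cls, dim=1)   # Normalize(): unit-length vectors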

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/ST-tramits-SQV-007-5ep")
# Run inference (the example texts are in Catalan, the model's target language)
sentences = [
    'L’Ajuntament de Sant Quirze del Vallès reconeix un dret preferent al titular del dret funerari sobre la corresponent sepultura o al successor o causahavent de l’anterior titular d’aquest dret, que permet adquirir de nou el dret funerari referit, sobre la mateixa sepultura, un cop el dret atorgat ha exhaurit el termini de vigència',
    'Quan es pot adquirir de nou el dret funerari?',
    'Quin és el paper del cens electoral en les eleccions?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
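
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to any of the smaller trained dimensions at a modest quality cost. A minimal sketch using the truncate_dim option of Sentence Transformers:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 of the 1024 embedding dimensions.
model = SentenceTransformer("adriansanz/ST-tramits-SQV-007-5ep", truncate_dim=256)
embeddings = model.encode(["Quan es pot adquirir de nou el dret funerari?"])
print(embeddings.shape)
# (1, 256)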

Evaluation

Metrics

Each of the six tables below reports the same retrieval metrics, computed with embeddings truncated to one of the Matryoshka dimensions used in training: 1024, 768, 512, 256, 128 and 64.

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.1017
cosine_accuracy@3 0.2771
cosine_accuracy@5 0.368
cosine_accuracy@10 0.4827
cosine_precision@1 0.1017
cosine_precision@3 0.0924
cosine_precision@5 0.0736
cosine_precision@10 0.0483
cosine_recall@1 0.1017
cosine_recall@3 0.2771
cosine_recall@5 0.368
cosine_recall@10 0.4827
cosine_ndcg@10 0.2757
cosine_mrr@10 0.2113
cosine_map@100 0.2287

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.119
cosine_accuracy@3 0.29
cosine_accuracy@5 0.3658
cosine_accuracy@10 0.4957
cosine_precision@1 0.119
cosine_precision@3 0.0967
cosine_precision@5 0.0732
cosine_precision@10 0.0496
cosine_recall@1 0.119
cosine_recall@3 0.29
cosine_recall@5 0.3658
cosine_recall@10 0.4957
cosine_ndcg@10 0.2892
cosine_mrr@10 0.2253
cosine_map@100 0.2428

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.1082
cosine_accuracy@3 0.2662
cosine_accuracy@5 0.3636
cosine_accuracy@10 0.5065
cosine_precision@1 0.1082
cosine_precision@3 0.0887
cosine_precision@5 0.0727
cosine_precision@10 0.0506
cosine_recall@1 0.1082
cosine_recall@3 0.2662
cosine_recall@5 0.3636
cosine_recall@10 0.5065
cosine_ndcg@10 0.2839
cosine_mrr@10 0.2156
cosine_map@100 0.2323

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.1147
cosine_accuracy@3 0.2403
cosine_accuracy@5 0.3398
cosine_accuracy@10 0.4805
cosine_precision@1 0.1147
cosine_precision@3 0.0801
cosine_precision@5 0.068
cosine_precision@10 0.0481
cosine_recall@1 0.1147
cosine_recall@3 0.2403
cosine_recall@5 0.3398
cosine_recall@10 0.4805
cosine_ndcg@10 0.275
cosine_mrr@10 0.212
cosine_map@100 0.2304

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.1126
cosine_accuracy@3 0.2641
cosine_accuracy@5 0.329
cosine_accuracy@10 0.487
cosine_precision@1 0.1126
cosine_precision@3 0.088
cosine_precision@5 0.0658
cosine_precision@10 0.0487
cosine_recall@1 0.1126
cosine_recall@3 0.2641
cosine_recall@5 0.329
cosine_recall@10 0.487
cosine_ndcg@10 0.2791
cosine_mrr@10 0.2152
cosine_map@100 0.234

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.1039
cosine_accuracy@3 0.2619
cosine_accuracy@5 0.3355
cosine_accuracy@10 0.474
cosine_precision@1 0.1039
cosine_precision@3 0.0873
cosine_precision@5 0.0671
cosine_precision@10 0.0474
cosine_recall@1 0.1039
cosine_recall@3 0.2619
cosine_recall@5 0.3355
cosine_recall@10 0.474
cosine_ndcg@10 0.27
cosine_mrr@10 0.2071
cosine_map@100 0.2256
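
These tables can be reproduced with Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch, assuming you supply your own queries, corpus and relevance judgments (the evaluation split itself is not published with this card); the truncate_dim argument selects the Matryoshka dimension:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("adriansanz/ST-tramits-SQV-007-5ep")

# Hypothetical evaluation data: id -> text mappings plus relevance judgments.
queries = {"q1": "Quan es pot adquirir de nou el dret funerari?"}
corpus = {"d1": "L'Ajuntament de Sant Quirze del Vallès reconeix un dret preferent ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    name="dim_1024",
    truncate_dim=1024,  # evaluate on embeddings truncated to this dimension
)
results = evaluator(model)
print(results)  # cosine_accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100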

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 6,468 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 5 tokens, mean 39.4 tokens, max 168 tokens
    • anchor: string; min 10 tokens, mean 20.48 tokens, max 44 tokens
  • Samples:
    • positive: Aquest tràmit permet la inscripció al padró dels canvis de domicili dins de Sant Quirze del Vallès...
      anchor: Quin és el benefici de la inscripció al Padró d'Habitants?
    • positive: Els recursos que es poden oferir al banc de recursos són: MATERIALS, PROFESSIONALS i SOCIALS.
      anchor: Quins tipus de recursos es poden oferir al banc de recursos?
    • positive: El termini per a la presentació de sol·licituds serà del 8 al 21 de maig de 2024, ambdós inclosos.
      anchor: Quin és el termini per a la presentació de sol·licituds per a la preinscripció a l'Escola Bressol Municipal El Patufet?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
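
A minimal sketch of how this loss configuration is typically constructed in Sentence Transformers (the actual training script is not included in this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")
# Apply the ranking loss at every Matryoshka dimension of the embedding.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],  # weights default to 1 per dim
)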
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
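
These values map one-to-one onto SentenceTransformerTrainingArguments. A minimal sketch of the corresponding trainer setup (output_dir and the data file are illustrative placeholders, not the author's actual paths):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-m3")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
)

# Illustrative: a local JSON file with "positive" and "anchor" columns.
train_dataset = load_dataset("json", data_files="train.json", split="train")

args = SentenceTransformerTrainingArguments(
    output_dir="ST-tramits-SQV-007-5ep",  # hypothetical output path
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",          # requires an eval dataset or evaluator as well
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()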

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

The six metric columns report cosine_map@100 at each Matryoshka dimension.

Epoch    Step   Train Loss   dim_1024   dim_128   dim_256   dim_512   dim_64    dim_768
0.3951   10     4.4042       -          -         -         -         -         -
0.7901   20     2.9471       -          -         -         -         -         -
0.9877   25     -            0.2293     0.2045    0.2099    0.2138    0.1717    0.2242
1.1852   30     2.2351       -          -         -         -         -         -
1.5802   40     1.5289       -          -         -         -         -         -
1.9753   50     1.2045       0.2332     0.2182    0.2277    0.2221    0.2051    0.2248
2.3704   60     0.9435       -          -         -         -         -         -
2.7654   70     0.7958       -          -         -         -         -         -
2.9630   75     -            0.2379     0.2352    0.2276    0.2204    0.2138    0.2235
3.1605   80     0.6703       -          -         -         -         -         -
3.5556   90     0.6162       -          -         -         -         -         -
3.9506   100    0.6079       -          -         -         -         -         -
3.9901   101    -            0.2251     0.2307    0.2201    0.2343    0.2210    0.2348
4.3457   110    0.5085       -          -         -         -         -         -
4.7407   120    0.5248       -          -         -         -         -         -
4.9383*  125    -            0.2287     0.2340    0.2304    0.2323    0.2256    0.2428

  • The row marked with * denotes the saved checkpoint; its scores match the Evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.35.0.dev0
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}