
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Model Size: ~109M parameters (F32 safetensors)
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
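
Per this architecture, an embedding is the L2-normalized CLS-token vector of the underlying BertModel. A minimal equivalent sketch in plain transformers (assuming, as is standard for sentence-transformers repos, that the Hub repo exposes the underlying BERT weights):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Naruke/bge-base-financial-matryoshka")
encoder = AutoModel.from_pretrained("Naruke/bge-base-financial-matryoshka")

inputs = tokenizer(["What guarantees does Tesla provide?"],
                   padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state
cls_embedding = hidden[:, 0]                                      # pooling_mode_cls_token
embedding = torch.nn.functional.normalize(cls_embedding, dim=1)   # the Normalize() module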

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Naruke/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'As part of our solar energy system and energy storage contracts, we may provide the customer with performance guarantees that commit that the underlying system will meet or exceed the minimum energy generation or performance requirements specified in the contract.',
    'What types of guarantees does Tesla provide to its solar and energy storage customers?',
    'How many full-time employees did Microsoft report as of June 30, 2023?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
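
Because the model was trained with MatryoshkaLoss over the dimensions 768, 512, 256, 128, and 64 (see Training Details), embeddings can also be truncated to a smaller size with modest quality loss. A sketch, assuming sentence-transformers >= 2.7.0, which introduced the truncate_dim argument:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer("Naruke/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["How much were lease obligations as of June 30, 2023?"])
print(embeddings.shape)
# (1, 256)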

Evaluation

Metrics

The five tables below report the same retrieval benchmark with embeddings truncated to 768, 512, 256, 128, and 64 dimensions, respectively; their map@100 values match the dim_*_cosine_map@100 columns in the Training Logs.

Information Retrieval (768 dimensions)

Metric Value
cosine_accuracy@1 0.71
cosine_accuracy@3 0.84
cosine_accuracy@5 0.8686
cosine_accuracy@10 0.9143
cosine_precision@1 0.71
cosine_precision@3 0.28
cosine_precision@5 0.1737
cosine_precision@10 0.0914
cosine_recall@1 0.71
cosine_recall@3 0.84
cosine_recall@5 0.8686
cosine_recall@10 0.9143
cosine_ndcg@10 0.8125
cosine_mrr@10 0.7798
cosine_map@100 0.7826

Information Retrieval (512 dimensions)

Metric Value
cosine_accuracy@1 0.7043
cosine_accuracy@3 0.8357
cosine_accuracy@5 0.8657
cosine_accuracy@10 0.9114
cosine_precision@1 0.7043
cosine_precision@3 0.2786
cosine_precision@5 0.1731
cosine_precision@10 0.0911
cosine_recall@1 0.7043
cosine_recall@3 0.8357
cosine_recall@5 0.8657
cosine_recall@10 0.9114
cosine_ndcg@10 0.8078
cosine_mrr@10 0.7745
cosine_map@100 0.7776

Information Retrieval (256 dimensions)

Metric Value
cosine_accuracy@1 0.7029
cosine_accuracy@3 0.8229
cosine_accuracy@5 0.8586
cosine_accuracy@10 0.8971
cosine_precision@1 0.7029
cosine_precision@3 0.2743
cosine_precision@5 0.1717
cosine_precision@10 0.0897
cosine_recall@1 0.7029
cosine_recall@3 0.8229
cosine_recall@5 0.8586
cosine_recall@10 0.8971
cosine_ndcg@10 0.8004
cosine_mrr@10 0.7693
cosine_map@100 0.7733

Information Retrieval (128 dimensions)

Metric Value
cosine_accuracy@1 0.6771
cosine_accuracy@3 0.8143
cosine_accuracy@5 0.8543
cosine_accuracy@10 0.8971
cosine_precision@1 0.6771
cosine_precision@3 0.2714
cosine_precision@5 0.1709
cosine_precision@10 0.0897
cosine_recall@1 0.6771
cosine_recall@3 0.8143
cosine_recall@5 0.8543
cosine_recall@10 0.8971
cosine_ndcg@10 0.7887
cosine_mrr@10 0.7538
cosine_map@100 0.7573

Information Retrieval (64 dimensions)

Metric Value
cosine_accuracy@1 0.6643
cosine_accuracy@3 0.7814
cosine_accuracy@5 0.8129
cosine_accuracy@10 0.86
cosine_precision@1 0.6643
cosine_precision@3 0.2605
cosine_precision@5 0.1626
cosine_precision@10 0.086
cosine_recall@1 0.6643
cosine_recall@3 0.7814
cosine_recall@5 0.8129
cosine_recall@10 0.86
cosine_ndcg@10 0.76
cosine_mrr@10 0.7283
cosine_map@100 0.7331
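
These metric names follow sentence-transformers' InformationRetrievalEvaluator. A minimal sketch of how such figures are produced (the toy queries/corpus/relevant_docs below are hypothetical; the card's numbers come from the model's held-out financial QA split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Naruke/bge-base-financial-matryoshka")

# Hypothetical toy data, keyed by ids
queries = {"q1": "What types of guarantees does Tesla provide to its solar customers?"}
corpus = {"d1": "As part of our solar energy system and energy storage contracts, "
                "we may provide the customer with performance guarantees."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
metrics = evaluator(model)
print(metrics)  # includes cosine_accuracy@k, cosine_precision@k, cosine_ndcg@10, cosine_map@100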

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    positive: string; min: 9 tokens, mean: 45.57 tokens, max: 289 tokens
    anchor: string; min: 9 tokens, mean: 20.32 tokens, max: 51 tokens
  • Samples:
    positive: The detailed information about commitments and contingencies related to legal proceedings is included under Note 13 in Part II, Item 8 of the Annual Report.
    anchor: Where can detailed information about the commitments and contingencies related to legal proceedings be found in the Annual Report on Form 10-K?

    positive: American Express's decision to reinvest gains into its business will depend on regulatory and other approvals, consultation requirements, the execution of ancillary agreements, the cost and availability of financing for the purchaser to fund the transaction and the potential loss of key customers, vendors and other business partners and management’s decisions regarding future operations, strategies and business initiatives.
    anchor: What factors influence American Express's decision to reinvest gains into its business?

    positive: Lease obligations as of June 30, 2023, related to office space and various facilities totaled $883.1 million, with lease terms ranging from one to 21 years and are mostly renewable.
    anchor: How much were lease obligations related to office space and other facilities as of June 30, 2023, and what were the terms?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
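
In sentence-transformers, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # weights default to 1 per dimension
)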
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
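
Expressed with the sentence-transformers 3.x training API, these overrides look roughly as follows (output_dir and save_strategy are assumptions not listed above; load_best_model_at_end requires matching eval and save strategies):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # hypothetical output path
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch",                       # assumed, to pair with load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)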

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
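
Putting the dataset, loss, and arguments together, a compact end-to-end sketch with the sentence-transformers 3.x trainer (the two-pair dataset is a hypothetical stand-in for the 6,300 positive/anchor samples):

from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

train_dataset = Dataset.from_dict({
    "positive": ["Lease obligations as of June 30, 2023 totaled $883.1 million.",
                 "Note 13 in Part II, Item 8 covers legal proceedings."],
    "anchor": ["How much were lease obligations as of June 30, 2023?",
               "Where are commitments and contingencies discussed?"],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])
args = SentenceTransformerTrainingArguments(output_dir="out", num_train_epochs=2,
                                            per_device_train_batch_size=16)

trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset, loss=loss)
trainer.train()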

Training Logs

Epoch    Step   Training Loss   dim_128   dim_256   dim_512   dim_64   dim_768
0.4061   10     0.9835          -         -         -         -        -
0.8122   20     0.4319          -         -         -         -        -
0.9746   24     -               0.7541    0.7729    0.7738    0.7242   0.7786
1.2183   30     0.3599          -         -         -         -        -
1.6244   40     0.2596          -         -         -         -        -
1.9492   48     -               0.7573    0.7733    0.7776    0.7331   0.7826

(dim_* columns report cosine_map@100 at that embedding dimension)
  • The final row (epoch 1.9492, step 48) denotes the saved checkpoint; its map@100 values match the Evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}