---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
  - en
library_name: sentence-transformers
license: apache-2.0
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:100
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      Fig. 8. The accuracy of instruct-GPT series models of different sizes
      (left to right, small to large). Larger model doing better on binary
      classification of answerable and unanswerable questions in SelfAware eval.
      (Image source: Yin et al. 2023)

      Another way to assess the model’s awareness of unknown knowledge is to
      measure the model’s output uncertainty. When a question is in-between
      known and unknown, the model is expected to demonstrate the right level of
      confidence.

      The experiment by Kadavath et al. (2022) showed that LLMs are shown to be
      well calibrated in their estimation probabilities of answer correctness on
      diverse multiple choice questions in a format with visible lettered answer
      options (MMLU, TruthfulQA, QuALITY, LogiQA), meaning that the predicted
      probability coincides with the frequency of that answer being true. RLHF
      fine-tuning makes the model poorly calibrated, but higher sampling
      temperature leads to better calibration results.
    sentences:
      - >-
        What effect does the slower acquisition of new knowledge compared to
        established knowledge have on the effectiveness of large language models
        in practical scenarios?
      - >-
        How do discrepancies identified during the final output review phase
        affect the overall quality of the generated responses?
      - >-
        What effect does reinforcement learning from human feedback (RLHF)
        fine-tuning have on how well large language models assess the accuracy
        of their answers?
  - source_sentence: >-
      Fig. 1. Knowledge categorization of close-book QA examples based on how
      likely the model outputs correct answers. (Image source: Gekhman et al.
      2024)

      Some interesting observations of the experiments, where dev set accuracy
      is considered a proxy for hallucinations.


      Unknown examples are fitted substantially slower than Known.

      The best dev performance is obtained when the LLM fits the majority of the
      Known training examples but only a few of the Unknown ones. The model
      starts to hallucinate when it learns most of the Unknown examples.

      Among Known examples, MaybeKnown cases result in better overall
      performance, more essential than HighlyKnown ones.
    sentences:
      - >-
        What is the relationship between the structural formatting of inquiries
        and the occurrence of calibration errors in artificial intelligence
        models, and in what ways can this understanding contribute to the
        optimization of model training processes?
      - >-
        What are the benefits of integrating a pretrained Natural Language
        Inference (NLI) model with MPNet when assessing the reliability of
        reasoning paths in knowledge retrieval?
      - >-
        In what ways do the classifications of Known versus Unknown examples
        influence the propensity of AI models to generate hallucinations during
        their training processes?
  - source_sentence: >-
      Fig. 3. The evaluation framework for the FactualityPrompt benchmark.(Image
      source: Lee, et al. 2022)

      Given the model continuation and paired Wikipedia text, two evaluation
      metrics for hallucination are considered:


      Hallucination NE (Named Entity) errors: Using a pretrained entity
      detection model and document-level grounding, this metric measures the
      fraction of detected named entities that do not appear in the ground truth
      document.

      Entailment ratios: Using a RoBERTa model fine-tuned on MNLI and
      sentence-level knowledge grounding, this metric calculates the fraction of
      generated sentences that are marked as relevant to the paired Wikipedia
      sentence by the entailment model.
    sentences:
      - >-
        What impact does the implementation of a pretrained query-document
        relevance model have on the process of document selection in research
        methodologies?
      - >-
        In what ways does the sequence in which information is delivered in
        AI-generated responses influence the likelihood of generating
        inaccuracies or hallucinations?
      - >-
        In what ways does the FactualityPrompt benchmark assess the performance
        of named entity detection models, particularly in relation to errors
        arising from hallucinated named entities?
  - source_sentence: >-
      Fig. 1. Knowledge categorization of close-book QA examples based on how
      likely the model outputs correct answers. (Image source: Gekhman et al.
      2024)

      Some interesting observations of the experiments, where dev set accuracy
      is considered a proxy for hallucinations.


      Unknown examples are fitted substantially slower than Known.

      The best dev performance is obtained when the LLM fits the majority of the
      Known training examples but only a few of the Unknown ones. The model
      starts to hallucinate when it learns most of the Unknown examples.

      Among Known examples, MaybeKnown cases result in better overall
      performance, more essential than HighlyKnown ones.
    sentences:
      - >-
        In what ways does the inherently adversarial structure of TruthfulQA
        inquiries facilitate the detection of prevalent fallacies in human
        cognitive processes, and what implications does this have for
        understanding the constraints of expansive language models?
      - >-
        In what ways do MaybeKnown cases influence the performance of a model
        when contrasted with HighlyKnown examples, particularly in relation to
        the occurrence of hallucinations?
      - >-
        In what ways does the Self-RAG framework leverage reflection tokens to
        enhance the quality of its generated outputs, and what implications does
        this have for the overall generation process?
  - source_sentence: >-
      Fine-tuning New Knowledge#

      Fine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a
      common technique for improving certain capabilities of the model like
      instruction following. Introducing new knowledge at the fine-tuning stage
      is hard to avoid.

      Fine-tuning usually consumes much less compute, making it debatable
      whether the model can reliably learn new knowledge via small-scale
      fine-tuning. Gekhman et al. 2024 studied the research question of whether
      fine-tuning LLMs on new knowledge encourages hallucinations. They found
      that (1) LLMs learn fine-tuning examples with new knowledge slower than
      other examples with knowledge consistent with the pre-existing knowledge
      of the model; (2) Once the examples with new knowledge are eventually
      learned, they increase the model’s tendency to hallucinate.
    sentences:
      - >-
        How does the IsRel token function in the retrieval process, and what
        impact does it have on the relevance of generated content to reduce
        hallucination?
      - >-
        What is the relationship between the calibration of AI models and the
        effectiveness of verbalized probabilities when applied to tasks of
        varying difficulty levels?
      - >-
        How do the results presented by Gekhman et al. in their 2024 study
        inform our understanding of the reliability metrics associated with
        large language models (LLMs) when subjected to fine-tuning with novel
        datasets?
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.828125
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9635416666666666
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9739583333333334
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9947916666666666
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.828125
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3211805555555556
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1947916666666666
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09947916666666667
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.828125
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9635416666666666
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9739583333333334
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9947916666666666
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9220150687007592
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8976707175925925
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8981047453703703
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.8020833333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9635416666666666
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9739583333333334
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9895833333333334
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.8020833333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3211805555555556
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1947916666666666
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09895833333333333
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.8020833333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9635416666666666
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9739583333333334
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9895833333333334
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9077325270335209
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.880220734126984
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8810414411976911
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.796875
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9583333333333334
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.96875
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9791666666666666
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.796875
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3194444444444445
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19374999999999998
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09791666666666665
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.796875
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9583333333333334
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.96875
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9791666666666666
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9011377823848584
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8746155753968253
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8757564484126984
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.7864583333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9322916666666666
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9635416666666666
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9635416666666666
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7864583333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3107638888888889
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19270833333333334
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09635416666666667
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7864583333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9322916666666666
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9635416666666666
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9635416666666666
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.888061438431803
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.8623263888888889
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8647421480429293
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.6875
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8645833333333334
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9270833333333334
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.96875
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6875
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2881944444444445
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.18541666666666665
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09687499999999999
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6875
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8645833333333334
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.9270833333333334
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.96875
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8335872598831777
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7895895337301586
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7917890681938919
            name: Cosine Map@100
---

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
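
Note the Normalize() module at the end of the pipeline: every embedding is L2-normalized, so the dot product of two embeddings equals their cosine similarity. A minimal sketch to confirm this (the two sentences are placeholders):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100")
emb = model.encode(["first example sentence", "second example sentence"])

# The Normalize() module gives every embedding unit L2 norm ...
print(np.linalg.norm(emb, axis=1))  # -> approximately [1. 1.]
# ... so a plain dot product already yields the cosine similarity
print(float(emb[0] @ emb[1]))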

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100")
# Run inference
sentences = [
    'Fine-tuning New Knowledge#\nFine-tuning a pre-trained LLM via supervised fine-tuning and RLHF is a common technique for improving certain capabilities of the model like instruction following. Introducing new knowledge at the fine-tuning stage is hard to avoid.\nFine-tuning usually consumes much less compute, making it debatable whether the model can reliably learn new knowledge via small-scale fine-tuning. Gekhman et al. 2024 studied the research question of whether fine-tuning LLMs on new knowledge encourages hallucinations. They found that (1) LLMs learn fine-tuning examples with new knowledge slower than other examples with knowledge consistent with the pre-existing knowledge of the model; (2) Once the examples with new knowledge are eventually learned, they increase the model’s tendency to hallucinate.',
    'How do the results presented by Gekhman et al. in their 2024 study inform our understanding of the reliability metrics associated with large language models (LLMs) when subjected to fine-tuning with novel datasets?',
    'What is the relationship between the calibration of AI models and the effectiveness of verbalized probabilities when applied to tasks of varying difficulty levels?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
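
Because the model was trained with MatryoshkaLoss over the dimensions 768, 512, 256, 128, and 64 (see the evaluation tables below), its embeddings can also be truncated to a smaller size at a modest accuracy cost. A sketch using the truncate_dim argument of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns truncated 256-dimensional embeddings
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100", truncate_dim=256)

embeddings = model.encode([
    "Fine-tuning a pre-trained LLM via supervised fine-tuning and RLHF ...",
    "How does fine-tuning on new knowledge affect hallucinations?",
])
print(embeddings.shape)
# [2, 256]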

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.8281
cosine_accuracy@3 0.9635
cosine_accuracy@5 0.974
cosine_accuracy@10 0.9948
cosine_precision@1 0.8281
cosine_precision@3 0.3212
cosine_precision@5 0.1948
cosine_precision@10 0.0995
cosine_recall@1 0.8281
cosine_recall@3 0.9635
cosine_recall@5 0.974
cosine_recall@10 0.9948
cosine_ndcg@10 0.922
cosine_mrr@10 0.8977
cosine_map@100 0.8981

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.8021
cosine_accuracy@3 0.9635
cosine_accuracy@5 0.974
cosine_accuracy@10 0.9896
cosine_precision@1 0.8021
cosine_precision@3 0.3212
cosine_precision@5 0.1948
cosine_precision@10 0.099
cosine_recall@1 0.8021
cosine_recall@3 0.9635
cosine_recall@5 0.974
cosine_recall@10 0.9896
cosine_ndcg@10 0.9077
cosine_mrr@10 0.8802
cosine_map@100 0.881

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7969
cosine_accuracy@3 0.9583
cosine_accuracy@5 0.9688
cosine_accuracy@10 0.9792
cosine_precision@1 0.7969
cosine_precision@3 0.3194
cosine_precision@5 0.1937
cosine_precision@10 0.0979
cosine_recall@1 0.7969
cosine_recall@3 0.9583
cosine_recall@5 0.9688
cosine_recall@10 0.9792
cosine_ndcg@10 0.9011
cosine_mrr@10 0.8746
cosine_map@100 0.8758

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7865
cosine_accuracy@3 0.9323
cosine_accuracy@5 0.9635
cosine_accuracy@10 0.9635
cosine_precision@1 0.7865
cosine_precision@3 0.3108
cosine_precision@5 0.1927
cosine_precision@10 0.0964
cosine_recall@1 0.7865
cosine_recall@3 0.9323
cosine_recall@5 0.9635
cosine_recall@10 0.9635
cosine_ndcg@10 0.8881
cosine_mrr@10 0.8623
cosine_map@100 0.8647

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6875
cosine_accuracy@3 0.8646
cosine_accuracy@5 0.9271
cosine_accuracy@10 0.9688
cosine_precision@1 0.6875
cosine_precision@3 0.2882
cosine_precision@5 0.1854
cosine_precision@10 0.0969
cosine_recall@1 0.6875
cosine_recall@3 0.8646
cosine_recall@5 0.9271
cosine_recall@10 0.9688
cosine_ndcg@10 0.8336
cosine_mrr@10 0.7896
cosine_map@100 0.7918
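
The five tables above correspond to the five Matryoshka dimensions (768, 512, 256, 128, 64). A hedged sketch of how such numbers can be reproduced with the library's InformationRetrievalEvaluator; the queries, corpus, and relevance mapping below are placeholders, since the evaluation split itself is not published with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("joshuapb/fine-tuned-matryoshka-100")

# Placeholder evaluation data: id -> text, and query id -> relevant corpus ids
queries = {"q1": "How does RLHF fine-tuning affect model calibration?"}
corpus = {
    "d1": "RLHF fine-tuning makes the model poorly calibrated ...",
    "d2": "Unknown examples are fitted substantially slower than Known.",
}
relevant_docs = {"q1": {"d1"}}

for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # score using only the first `dim` embedding dimensions
    )
    results = evaluator(model)
    print(dim, results[f"dim_{dim}_cosine_ndcg@10"])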

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • load_best_model_at_end: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
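
For orientation, here is a minimal training sketch consistent with the hyperparameters above, wrapping MultipleNegativesRankingLoss in MatryoshkaLoss as the model tags indicate. The (anchor, positive) pairs are placeholders; the actual 100-example training set is not published with this card:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholder (anchor, positive) pairs standing in for the real training data
train_dataset = Dataset.from_dict({
    "anchor": ["How does RLHF fine-tuning affect calibration?"],
    "positive": ["RLHF fine-tuning makes the model poorly calibrated ..."],
})

# In-batch negatives ranking loss, applied at every Matryoshka dimension
# so that truncated embeddings remain useful
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-matryoshka-100",
    num_train_epochs=5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; use a held-out split in practice
    loss=loss,
)
trainer.train()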

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.3846 5 5.0472 - - - - -
0.7692 10 4.0023 - - - - -
1.0 13 - 0.7939 0.8135 0.8282 0.7207 0.8323
1.1538 15 2.3381 - - - - -
1.5385 20 3.4302 - - - - -
1.9231 25 2.08 - - - - -
2.0 26 - 0.8494 0.8681 0.8781 0.7959 0.8888
2.3077 30 1.4696 - - - - -
2.6923 35 1.8153 - - - - -
3.0 39 - 0.8641 0.8844 0.8924 0.7952 0.8997
3.0769 40 1.3498 - - - - -
3.4615 45 0.9135 - - - - -
3.8462 50 1.3996 - - - - -
4.0 52 - 0.8647 0.8775 0.8819 0.7896 0.8990
4.2308 55 1.1582 - - - - -
4.6154 60 1.2233 - - - - -
5.0 65 0.9757 0.8647 0.8758 0.8810 0.7918 0.8981
  • The saved checkpoint corresponds to the epoch 5.0 row (step 65); its cosine_map@100 values match the metrics reported in the Evaluation section above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
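
To approximately reproduce this environment, the versions above can be pinned directly (the PyTorch 2.3.1 build with CUDA 12.1 wheels is installed from the PyTorch index):

pip install sentence-transformers==3.0.1 transformers==4.42.4 accelerate==0.32.1 datasets==2.21.0 tokenizers==0.19.1
pip install torch==2.3.1 --index-url https://download.pytorch.org/whl/cu121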

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}