
SentenceTransformer based on sentence-transformers/msmarco-distilbert-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/msmarco-distilbert-base-v2 on anchor/positive/negative triplets drawn from legal contract clauses (see Training Details). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/msmarco-distilbert-base-v2
  • Maximum Sequence Length: 350 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 66.4M parameters (F32)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: https://huggingface.co/kperkins411/msmarco-distilbert-base-v2_triplet_legal

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
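
The Pooling layer averages DistilBERT's token embeddings (mean pooling over non-padding tokens) to produce one 768-dimensional vector per input. A minimal sketch of the equivalent computation with the raw transformers API, assuming the repository exposes the underlying DistilBERT weights in the usual sentence-transformers layout:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("kperkins411/msmarco-distilbert-base-v2_triplet_legal")
backbone = AutoModel.from_pretrained("kperkins411/msmarco-distilbert-base-v2_triplet_legal")

encoded = tokenizer(["example sentence"], padding=True, truncation=True,
                    max_length=350, return_tensors="pt")
with torch.no_grad():
    token_embeddings = backbone(**encoded).last_hidden_state   # (batch, seq_len, 768)

# Mean pooling: average only over real tokens, masking out padding.
mask = encoded["attention_mask"].unsqueeze(-1).float()         # (batch, seq_len, 1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])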

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kperkins411/msmarco-distilbert-base-v2_triplet_legal")
# Run inference
sentences = [
    'In what circumstances can FCE assume responsibility for a Program Patent?',
    'Notwithstanding the foregoing, in the event ExxonMobil decides not to prosecute, defend, enforce, maintain or decides to abandon any Program Patent, then ExxonMobil will provide notice thereof to FCE, and FCE will then have the right, but not the obligation, to prosecute or maintain the Program Patent and sole responsibility for the continuing costs, taxes, legal fees, maintenance fees and other fees associated with that Program Patent.',
    '4. Limitation of Liability of the Sponsor. The Sponsor shall not be liable for any error of judgment or mistake of law or for any act or omission in the oversight, administration or management of the Trust or the performance of its duties hereunder, except for willful misfeasance, bad faith or gross negligence in the performance of its duties, or by reason of the reckless disregard of its obligations and duties hereunder. As used in this Section 4, the term "Sponsor" shall include Domini and/or any of its affiliates and the directors, officers and employees of Domini and/or any of its affiliates.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
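
Given the legal-contract training data, a natural application is ranking contract clauses against a question. A short sketch under the same API (the clauses and the query below are illustrative, and model.similarity is assumed to use the default cosine similarity):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("kperkins411/msmarco-distilbert-base-v2_triplet_legal")

# Illustrative mini-corpus of contract clauses.
corpus = [
    "Each Party will retain such records for at least three (3) years following expiration or termination of this Agreement.",
    "Either party hereto may terminate this Agreement after the Initial Period upon six (6) months' prior written notice.",
]
query = "How long must records be retained?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())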

Evaluation

Metrics

Information Retrieval

Metric                Value
cosine_accuracy@1     0.3953
cosine_accuracy@3     0.5377
cosine_accuracy@5     0.5945
cosine_accuracy@10    0.6736
cosine_precision@1    0.3953
cosine_precision@3    0.1792
cosine_precision@5    0.1189
cosine_precision@10   0.0674
cosine_recall@1       0.3953
cosine_recall@3       0.5377
cosine_recall@5       0.5945
cosine_recall@10      0.6736
cosine_ndcg@10        0.5277
cosine_mrr@10         0.4819
cosine_map@100        0.489
dot_accuracy@1        0.3964
dot_accuracy@3        0.5335
dot_accuracy@5        0.5933
dot_accuracy@10       0.6744
dot_precision@1       0.3964
dot_precision@3       0.1778
dot_precision@5       0.1187
dot_precision@10      0.0674
dot_recall@1          0.3964
dot_recall@3          0.5335
dot_recall@5          0.5933
dot_recall@10         0.6744
dot_ndcg@10           0.5275
dot_mrr@10            0.4815
dot_map@100           0.4885

Each query has exactly one relevant passage, which is why recall@k equals accuracy@k and precision@k equals accuracy@k divided by k.
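
Metrics of this shape are what sentence-transformers' InformationRetrievalEvaluator reports. A hedged sketch of reproducing such numbers on your own query/corpus split (the IDs, texts, and evaluator name below are placeholders):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("kperkins411/msmarco-distilbert-base-v2_triplet_legal")

# Placeholder evaluation data: queries and corpus keyed by ID, plus relevance judgments.
queries = {"q1": "In what circumstances can FCE assume responsibility for a Program Patent?"}
corpus = {
    "d1": "Notwithstanding the foregoing, in the event ExxonMobil decides not to prosecute ...",
    "d2": "4. Limitation of Liability of the Sponsor. The Sponsor shall not be liable ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="legal-ir")
results = evaluator(model)
print(results)  # dict with keys like 'legal-ir_cosine_map@100'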

Training Details

Training Dataset

Unnamed Dataset

  • Size: 88,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

    Column    Type    Min       Mean           Max
    anchor    string  7 tokens  17.42 tokens   104 tokens
    positive  string  6 tokens  102.85 tokens  350 tokens
    negative  string  6 tokens  103.73 tokens  350 tokens
  • Samples:
    The three rows below share one anchor/positive pair and differ only in the negative:
    • Anchor: What happens if a Party fails to retain records for the required period?
    • Positive: Each Party will retain such records for at least three (3) years following expiration or termination of this Agreement or such longer period as may be required by applicable law or regulation.
    • Negative (row 1): Either party hereto may terminate this Agreement after the Initial Period upon at least six (6) months' prior written notice to the other party thereof.
    • Negative (row 2): The Agreement may be terminated by both Parties with a notification period of *** before the end of the Initial Term of the Agreement.
    • Negative (row 3): For twelve (12) months after delivery of the Master Copy of each Licensed Product to Licensee, Licensor warrants that the media in which the Licensed Products are stored shall be free from defects in materials and workmanship, assuming normal use. Licensee may return any defective media to Licensor for replacement free of charge during such twelve (12) month period.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
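
For reference, these parameters correspond to constructing the loss roughly as follows (a sketch against the standard sentence-transformers API, not the author's exact script):

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-v2")

# Euclidean triplet loss with margin 5, matching the parameters above.
loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)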
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,084 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

    Column    Type    Min       Mean           Max
    anchor    string  6 tokens  20.24 tokens   124 tokens
    positive  string  6 tokens  97.01 tokens   350 tokens
    negative  string  6 tokens  105.03 tokens  350 tokens
  • Samples:
    The three rows below share one anchor/positive pair and differ only in the negative:
    • Anchor: Are Capital Contributions categorized as either 'Initial' or 'Additional' in the accounts?
    • Positive: Capital Accounts. An individual capital account (the "Capital Accounts") will be maintained for each Participant and their Initial Capital Contribution will be credited to this account. Any Additional Capital Contributions made by any Participant will be credited to that Participant's individual Capital Account.
    • Negative (row 1): Section 4.3 Deposits and Payments 19
    • Negative (row 2): Section 2.1 The Fund agrees at its own expense to execute any and all documents, to furnish any and all information, and to take any other actions that may be reasonably necessary in connection with the qualification of the Shares for sale in those states that Integrity may designate.
    • Negative (row 3): Section 1.9 Integrity shall prepare and deliver reports to the Treasurer of the Fund and to the Investment Adviser on a regular, at least quarterly, basis, showing the distribution expenses incurred pursuant to this Agreement and the Plan and the purposes therefore, as well as any supplemental reports as the Trustees, from time to time, may reasonably request.
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
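
Both splits follow the same anchor/positive/negative schema, which the trainer consumes as a three-column dataset. A minimal sketch of assembling one with the datasets library (the texts are truncated placeholders taken from the samples above):

from datasets import Dataset

train_dataset = Dataset.from_dict({
    "anchor":   ["What happens if a Party fails to retain records for the required period?"],
    "positive": ["Each Party will retain such records for at least three (3) years ..."],
    "negative": ["Either party hereto may terminate this Agreement after the Initial Period ..."],
})
print(train_dataset)  # Dataset({features: ['anchor', 'positive', 'negative'], num_rows: 1})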
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • learning_rate: 2e-05
  • num_train_epochs: 6
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
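
A hedged sketch of how these values map onto sentence-transformers 3.x training arguments (the output path is illustrative, and save_strategy="epoch" is an assumption needed so load_best_model_at_end can pair checkpoints with the epoch-wise evaluations):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/msmarco-distilbert-base-v2_triplet_legal",  # illustrative
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: required for load_best_model_at_end with epoch evals
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=6,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts within a batch
)

These arguments would then be passed to a SentenceTransformerTrainer together with the triplet datasets and the TripletLoss configured above.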

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Training Loss   Validation Loss   msmarco-distilbert-base-v2_cosine_map@100
0 0 - - 0.4145
0.1453 100 1.7626 - -
0.2907 200 0.9595 - -
0.4360 300 0.7263 - -
0.5814 400 0.6187 - -
0.7267 500 0.5571 - -
0.8721 600 0.4885 - -
1.0131 697 - 0.3676 -
1.0044 700 0.4283 - -
1.1497 800 0.3956 - -
1.2951 900 0.2941 - -
1.4404 1000 0.2437 - -
1.5858 1100 0.1988 - -
1.7311 1200 0.185 - -
1.8765 1300 0.1571 - -
2.0131 1394 - 0.2679 -
2.0087 1400 0.1409 - -
2.1541 1500 0.1368 - -
2.2994 1600 0.111 - -
2.4448 1700 0.0994 - -
2.5901 1800 0.0837 - -
2.7355 1900 0.076 - -
2.8808 2000 0.0645 - -
3.0131 2091 - 0.2412 -
3.0131 2100 0.0607 - -
3.1584 2200 0.0609 - -
3.3038 2300 0.0503 - -
3.4491 2400 0.0483 - -
3.5945 2500 0.0402 - -
3.7398 2600 0.0397 - -
3.8852 2700 0.0305 - -
4.0131 2788 - 0.2196 -
4.0174 2800 0.0304 - -
4.1628 2900 0.0307 - -
4.3081 3000 0.0256 - -
4.4535 3100 0.0258 - -
4.5988 3200 0.0212 - -
4.7442 3300 0.0213 - -
4.8895 3400 0.0174 - -
5.0131 3485 - 0.2036 -
5.0218 3500 0.0191 - -
5.1672 3600 0.0198 - -
5.3125 3700 0.0161 - -
5.4578 3800 0.0166 - -
5.6032 3900 0.0135 - -
5.7485 4000 0.0145 - -
5.8939 4100 0.0129 - -
5.9346 4128 - 0.1966 0.489 *
  • The row marked with * denotes the saved checkpoint (the best validation loss, consistent with load_best_model_at_end).

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.1.0.dev0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
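
To approximate this environment, the released versions can be pinned (sentence-transformers 3.1.0.dev0 was a development build; assuming any 3.1.x release behaves equivalently):

pip install "sentence-transformers>=3.1,<3.2" transformers==4.41.2 torch==2.1.2 accelerate==0.31.0 datasets==2.19.1 tokenizers==0.19.1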

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}