---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:23863
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      SFP+ 10GBase-SR 10 Gigabit Ethernet Optics, 850nm for up to 300m
      transmission on MMF
    sentences:
      - Software
      - Data Voice or Multimedia Network Equipment or Platforms and Accessories
      - >-
        Components for information technology or broadcasting or
        telecommunications
  - source_sentence: >-
      Apple Macbook Pro Retina 15.4 inch Intel Core i7 2.5GHz 16GB 512GB SSD
      MJLT2TU/A
    sentences:
      - Consumer electronics
      - Office supply
      - Computer Equipment and Accessories
  - source_sentence: >-
      Switch and Route Processing Unit A5(Including 1*2G Memory and 1*1G CF
      Card)
    sentences:
      - Data Voice or Multimedia Network Equipment or Platforms and Accessories
      - >-
        Components for information technology or broadcasting or
        telecommunications
      - Consumer electronics
  - source_sentence: Samsung Gear VR R325
    sentences:
      - Computer Equipment and Accessories
      - Data Voice or Multimedia Network Equipment or Platforms and Accessories
      - Communications Devices and Accessories
  - source_sentence: >-
      SUN.Sun Fire T1000 Server, 6 core, 1.0GHz UltraSPARC T1 processor, 4GB
      DDR2 memory (4 * 1GB DIMMs), 160 SATA hard disk drive.
    sentences:
      - Computer Equipment and Accessories
      - Communications Devices and Accessories
      - Domestic appliances
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: pearson_cosine
            value: .nan
            name: Pearson Cosine
          - type: spearman_cosine
            value: .nan
            name: Spearman Cosine
          - type: pearson_manhattan
            value: .nan
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: .nan
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: .nan
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: .nan
            name: Spearman Euclidean
          - type: pearson_dot
            value: .nan
            name: Pearson Dot
          - type: spearman_dot
            value: .nan
            name: Spearman Dot
          - type: pearson_max
            value: .nan
            name: Pearson Max
          - type: spearman_max
            value: .nan
            name: Spearman Max
---

SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
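
Because the final Normalize() module scales every embedding to unit L2 norm, cosine similarity and dot product coincide for this model. A minimal sketch that checks this property (numpy is the only extra dependency):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("alpcansoydas/product-model-17.10.24-ifhavemorethan100sampleperfamily")
embeddings = model.encode(["SFP+ 10GBase-SR 10 Gigabit Ethernet Optics"])

# The Normalize() module makes every embedding unit-length,
# so the L2 norm should print as (approximately) 1.0
print(np.linalg.norm(embeddings, axis=1))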

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("alpcansoydas/product-model-17.10.24-ifhavemorethan100sampleperfamily")
# Run inference
sentences = [
    'SUN.Sun Fire T1000 Server, 6 core, 1.0GHz UltraSPARC T1 processor, 4GB DDR2 memory (4 * 1GB DIMMs), 160 SATA hard disk drive.',
    'Computer Equipment and Accessories',
    'Communications Devices and Accessories',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
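
Since the training pairs map product descriptions to product-family names, a natural downstream use is zero-shot category assignment: embed a description and every candidate category, then pick the closest one. A minimal sketch (the category list below is illustrative, not the model's full label set):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("alpcansoydas/product-model-17.10.24-ifhavemorethan100sampleperfamily")

# Illustrative candidate categories (not the complete label set)
categories = [
    "Computer Equipment and Accessories",
    "Communications Devices and Accessories",
    "Consumer electronics",
]
product = "Apple Macbook Pro Retina 15.4 inch Intel Core i7"

prod_emb = model.encode([product])
cat_emb = model.encode(categories)

# Cosine similarity between the product and every candidate category
scores = model.similarity(prod_emb, cat_emb)  # shape [1, len(categories)]
print(categories[int(scores.argmax())])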

Evaluation

Metrics

Semantic Similarity

Metric              Value
pearson_cosine      nan
spearman_cosine     nan
pearson_manhattan   nan
spearman_manhattan  nan
pearson_euclidean   nan
spearman_euclidean  nan
pearson_dot         nan
spearman_dot        nan
pearson_max         nan
spearman_max        nan
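
All correlations are reported as nan, which typically means the gold similarity scores in the evaluation set had zero variance, leaving the correlation undefined. The metric names match the output of the Sentence Transformers EmbeddingSimilarityEvaluator; a minimal sketch of such an evaluation, with placeholder pairs and gold scores:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("alpcansoydas/product-model-17.10.24-ifhavemorethan100sampleperfamily")

# Placeholder evaluation pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["HP PROLIANT DL160 G7 SERVER", "Samsung Gear VR R325"],
    sentences2=["Computer Equipment and Accessories", "Domestic appliances"],
    scores=[1.0, 0.0],
)
results = evaluator(model)  # dict with pearson_cosine, spearman_cosine, ...
print(results)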

Training Details

Training Dataset

Unnamed Dataset

  • Size: 23,863 training samples
  • Columns: sentence1 and sentence2
  • Approximate statistics based on the first 1000 samples:

    Column     Type    Min       Mean         Max
    sentence1  string  3 tokens  16.7 tokens  78 tokens
    sentence2  string  3 tokens  7.97 tokens  12 tokens

  • Samples:

    sentence1                                          | sentence2
    High_Performance_DB_HPE ProLiant DL380 Gen10 8SFF  | Computer Equipment and Accessories
    HP PROLIANT DL160 G7 SERVER                        | Computer Equipment and Accessories
    ZTE 24-port GE SFP Physical Line Interface Unit Z  | Components for information technology or broadcasting or telecommunications
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
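
With this loss, each sentence1 is pulled toward its paired sentence2 while every other sentence2 in the batch acts as an in-batch negative. A minimal sketch of constructing the loss with these parameters:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# scale=20.0 multiplies the cosine similarities before the cross-entropy
# over in-batch candidates; similarity_fct selects cosine similarity
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)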
    

Evaluation Dataset

Unnamed Dataset

  • Size: 5,114 evaluation samples
  • Columns: sentence1 and sentence2
  • Approximate statistics based on the first 1000 samples:

    Column     Type    Min       Mean          Max
    sentence1  string  3 tokens  17.25 tokens  93 tokens
    sentence2  string  3 tokens  7.83 tokens   12 tokens

  • Samples:

    sentence1                       | sentence2
    Symantec Security Analytics     | Computer Equipment and Accessories
    RAU2 X 7/A28 HP Kit HIGH        | Data Voice or Multimedia Network Equipment or Platforms and Accessories
    HPE DL360 Gen9 8SFF CTO Server  | Computer Equipment and Accessories
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 2
  • warmup_ratio: 0.1
  • fp16: True
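
These map onto the Sentence Transformers 3.x training arguments. A sketch of the corresponding configuration (output_dir is a placeholder; all other fields keep their defaults):

from sentence_transformers import SentenceTransformerTrainingArguments

# Non-default hyperparameters from this run
args = SentenceTransformerTrainingArguments(
    output_dir="output/product-model",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,
)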

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  Validation Loss  spearman_max
0.0670  100   2.2597         1.9744           nan
0.1340  200   1.9663         1.8451           nan
0.2011  300   1.9035         1.8232           nan
0.2681  400   1.8447         1.7664           nan
0.3351  500   1.7951         1.7387           nan
0.4021  600   1.7409         1.7485           nan
0.4692  700   1.7049         1.7022           nan
0.5362  800   1.7058         1.6885           nan
0.6032  900   1.6933         1.6730           nan
0.6702  1000  1.7053         1.6562           nan
0.7373  1100  1.6289         1.6613           nan
0.8043  1200  1.6046         1.6571           nan
0.8713  1300  1.6332         1.6420           nan
0.9383  1400  1.6431         1.6107           nan
1.0054  1500  1.6104         1.6309           nan
1.0724  1600  1.5444         1.6234           nan
1.1394  1700  1.4944         1.6043           nan
1.2064  1800  1.5099         1.6083           nan
1.2735  1900  1.4763         1.6369           nan
1.3405  2000  1.5351         1.5959           nan
1.4075  2100  1.4537         1.6378           nan
1.4745  2200  1.5263         1.5769           nan
1.5416  2300  1.4600         1.5889           nan
1.6086  2400  1.4781         1.5744           nan
1.6756  2500  1.4932         1.5663           nan
1.7426  2600  1.4158         1.5585           nan
1.8097  2700  1.4571         1.5580           nan
1.8767  2800  1.4078         1.5627           nan
1.9437  2900  1.4205         1.5622           nan

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}