
SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-large-en-v1.5. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-large-en-v1.5
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
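
These properties can be checked directly on the loaded model. A minimal sketch follows; the repository id is the one this card belongs to, and trust_remote_code is an assumption that may be required because the base model ships a custom architecture:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ganeshanmalhotra007/model_3", trust_remote_code=True)
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 1024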

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
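
The same two-module stack (a Transformer encoder followed by CLS-token pooling) can be assembled by hand from the base checkpoint. This is a minimal sketch rather than the exact export stored in this repository; the trust_remote_code flags are assumptions needed because gte-large-en-v1.5 registers a custom NewModel architecture:

from sentence_transformers import SentenceTransformer, models

# Transformer module wrapping the base encoder, truncating inputs at 8192 tokens
encoder = models.Transformer(
    "Alibaba-NLP/gte-large-en-v1.5",
    max_seq_length=8192,
    model_args={"trust_remote_code": True},
    tokenizer_args={"trust_remote_code": True},
    config_args={"trust_remote_code": True},
)

# CLS-token pooling over the 1024-dimensional token embeddings
pooling = models.Pooling(encoder.get_word_embedding_dimension(), pooling_mode="cls")

model = SentenceTransformer(modules=[encoder, pooling])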

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ganeshanmalhotra007/model_3")
# Run inference
sentences = [
    "What was Nathan's response to the initial proposal from Global Air U?",
    "I don't see on the proposal.\nI don't see anything class or the class related.\nUm.\nOh, so for the course.\nNo, no.",
    'And hopefully that should update now in your account in a second.\nYeah.\nIf you give that a go now, you should see all the way to August 2025.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
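
Reusing the model loaded above, the same embeddings can also drive a small retrieval step, which is the pattern measured in the Evaluation section below. The query and corpus here are placeholders:

# Placeholder corpus and query for a minimal retrieval example
corpus = [
    "He finally got around to giving me the information necessary to set up Snowflake share.",
    "If you give that a go now, you should see all the way to August 2025.",
]
query = ["What progress has been made with setting up Snowflake share?"]

corpus_embeddings = model.encode(corpus)
query_embeddings = model.encode(query)

# Rank corpus passages for the query by cosine similarity
scores = model.similarity(query_embeddings, corpus_embeddings)  # shape [1, 2]
best = int(scores[0].argmax())
print(corpus[best])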

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.3279
cosine_accuracy@3 0.4898
cosine_accuracy@5 0.5663
cosine_accuracy@10 0.6613
cosine_accuracy@30 0.767
cosine_accuracy@50 0.8155
cosine_accuracy@100 0.8598
cosine_precision@1 0.3279
cosine_precision@3 0.1902
cosine_precision@5 0.1383
cosine_precision@10 0.0872
cosine_precision@30 0.0384
cosine_precision@50 0.0257
cosine_precision@100 0.0143
cosine_recall@1 0.1988
cosine_recall@3 0.3261
cosine_recall@5 0.391
cosine_recall@10 0.4756
cosine_recall@30 0.6031
cosine_recall@50 0.6602
cosine_recall@100 0.7195
cosine_ndcg@10 0.3785
cosine_mrr@10 0.4295
cosine_map@100 0.3193
dot_accuracy@1 0.329
dot_accuracy@3 0.4887
dot_accuracy@5 0.5717
dot_accuracy@10 0.6634
dot_accuracy@30 0.767
dot_accuracy@50 0.8134
dot_accuracy@100 0.8619
dot_precision@1 0.329
dot_precision@3 0.1899
dot_precision@5 0.1387
dot_precision@10 0.0874
dot_precision@30 0.0385
dot_precision@50 0.0257
dot_precision@100 0.0143
dot_recall@1 0.1994
dot_recall@3 0.3259
dot_recall@5 0.3937
dot_recall@10 0.4771
dot_recall@30 0.6044
dot_recall@50 0.6591
dot_recall@100 0.722
dot_ndcg@10 0.3791
dot_mrr@10 0.4305
dot_map@100 0.3195
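
Metrics of this shape are what Sentence Transformers' InformationRetrievalEvaluator reports. Below is a hedged sketch of running the same kind of evaluation on your own data; the queries, corpus, and relevance judgments are placeholders:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ganeshanmalhotra007/model_3", trust_remote_code=True)

# Placeholder evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "What progress has been made with setting up Snowflake share?"}
corpus = {
    "d1": "He finally got around to giving me the information necessary to set up Snowflake share.",
    "d2": "If you give that a go now, you should see all the way to August 2025.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    accuracy_at_k=[1, 3, 5, 10, 30, 50, 100],
    precision_recall_at_k=[1, 3, 5, 10, 30, 50, 100],
    name="ir-eval",
)
print(evaluator(model))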

Training Details

Training Dataset

Unnamed Dataset

  • Size: 7,005 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string · min: 8 tokens · mean: 14.59 tokens · max: 25 tokens
    • positive: string · min: 12 tokens · mean: 60.98 tokens · max: 170 tokens
  • Samples:
    • Sample 1
      anchor: What progress has been made with setting up Snowflake share?
      positive: He finally got around to giving me the information necessary to set up Snowflake share. I will be submitting the application to get back set up. Once the database is set up, then we just need to figure out how to configure Snowflake share, which it's going to be in the documentation. We should be set on that end. We also are going to have a conversation with someone named Peter Tsanghen, who's, who owns Jira platform. Great.
    • Sample 2
      anchor: Who is Peter Tsanghen and what is the planned interaction with him?
      positive: He finally got around to giving me the information necessary to set up Snowflake share. I will be submitting the application to get back set up. Once the database is set up, then we just need to figure out how to configure Snowflake share, which it's going to be in the documentation. We should be set on that end. We also are going to have a conversation with someone named Peter Tsanghen, who's, who owns Jira platform. Great.
    • Sample 3
      anchor: Who is Peter Tsanghen and what is the planned interaction with him?
      positive: Uh, and so now we just have to meet with Peter. Peter is someone who I used to work with on, he used to work on, uh, syndicated data products. So I used to work with him on that.
  • Loss: main.MultipleNegativesRankingLoss_with_logging
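
The pairs above feed an in-batch-negatives objective: for each anchor, the other positives in the batch act as negatives. The loss listed is a custom logging subclass; a minimal sketch with the stock MultipleNegativesRankingLoss looks like this (sample texts are placeholders):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)

# (anchor, positive) pairs matching the column layout above
train_dataset = Dataset.from_dict({
    "anchor": ["What progress has been made with setting up Snowflake share?"],
    "positive": ["He finally got around to giving me the information necessary to set up Snowflake share."],
})

# Other in-batch positives serve as negatives for each anchor
loss = MultipleNegativesRankingLoss(model)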

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • num_train_epochs: 2
  • max_steps: 1751
  • disable_tqdm: True
  • multi_dataset_batch_sampler: round_robin
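
Continuing the dataset and loss sketch from the Training Dataset section, the non-default hyperparameters above map onto SentenceTransformerTrainingArguments roughly as follows (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="model_3",  # placeholder
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=2,
    max_steps=1751,
    disable_tqdm=True,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()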

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 2
  • max_steps: 1751
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: True
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_map@100
0.0114 20 0.2538
0.0228 40 0.2601
0.0342 60 0.2724
0.0457 80 0.2911
0.0571 100 0.2976
0.0685 120 0.3075
0.0799 140 0.3071
0.0913 160 0.3111
0.1027 180 0.3193

Framework Versions

  • Python: 3.10.9
  • Sentence Transformers: 3.0.1
  • Transformers: 4.39.3
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.15.2
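
To approximate this environment, the library versions above can be pinned at install time; PyTorch with the matching CUDA build is installed separately:

pip install sentence-transformers==3.0.1 transformers==4.39.3 accelerate==0.31.0 datasets==2.20.0 tokenizers==0.15.2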

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}