SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
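This stack applies CLS-token pooling to the BERT output and then L2-normalizes the result. As a minimal sketch of the equivalent computation with the `transformers` library used directly (this usage pattern is inferred from the modules listed above and is not part of the original card):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")
model = AutoModel.from_pretrained("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")

sentences = ["What are the registered service marks of Citigroup Inc?"]
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# (1) Pooling: keep only the [CLS] token embedding (pooling_mode_cls_token=True above)
cls_embeddings = outputs.last_hidden_state[:, 0]
# (2) Normalize: unit-length vectors, so dot products equal cosine similarities
embeddings = F.normalize(cls_embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```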
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")
# Run inference
sentences = [
    ' and Arc Design is a registered service mark of Citigroup Inc. OpenInvestor is a service mark of Citigroup Inc. 1044398 GTS74053 0113 Trade Working Capital Viewpoints Navigating global uncertainty: Perspectives on supporting the healthcare supply chain November 2023 Treasury and Trade Solutions Foreword Foreword Since the inception of the COVID-19 pandemic, the healthcare industry has faced supply chain disruptions. The industry, which has a long tradition in innovation, continues to transform to meet the needs of an evolving environment. Pauline kXXXXX Unlocking the full potential within the healthcare industry Global Head, Trade requires continuous investment. As corporates plan for the Working Capital Advisory future, careful working capital management is essential to ensuring they get there. Andrew Betts Global head of TTS Trade Sales Client Management, Citi Bayo Gbowu Global Sector Lead, Trade Healthcare and Wellness Ian Kervick-Jimenez Trade Working Capital Advisory 2 Treasury and Trade Solutions The Working',
    'What are the registered service marks of Citigroup Inc?',
    'What is the role of DXX jXXXX US Real Estate Total Return SM Index in determining, composing or calculating products?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
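For the retrieval use case this model was trained on, a common pattern is to embed one query and a set of candidate passages, then rank the passages by cosine similarity. A minimal sketch with hypothetical passages (in practice these would come from your own document corpus):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")

query = "What are the registered service marks of Citigroup Inc?"
# Hypothetical candidate passages
passages = [
    "Arc Design is a registered service mark of Citigroup Inc.",
    "Deposits received before the end of a Business Day will be credited that day.",
]

query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)

# Embeddings are L2-normalized (see the Normalize module above),
# so cosine similarity directly ranks the passages.
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, len(passages)]
print(passages[scores.argmax().item()])
```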
Evaluation
Metrics
Information Retrieval
- Dataset: `dim_768`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.4942 |
cosine_accuracy@3 | 0.6768 |
cosine_accuracy@5 | 0.7478 |
cosine_accuracy@10 | 0.8333 |
cosine_precision@1 | 0.4942 |
cosine_precision@3 | 0.2256 |
cosine_precision@5 | 0.1496 |
cosine_precision@10 | 0.0833 |
cosine_recall@1 | 0.4942 |
cosine_recall@3 | 0.6768 |
cosine_recall@5 | 0.7478 |
cosine_recall@10 | 0.8333 |
cosine_ndcg@10 | 0.6585 |
cosine_ndcg@100 | 0.6901 |
cosine_mrr@10 | 0.6032 |
cosine_map@100 | 0.6096 |
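As a sketch of how such an evaluation can be run with the evaluator named above: the actual held-out queries, corpus, and relevance judgments are not published with this card, so the toy data below is purely hypothetical.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-6k-0_5k-e2")

# Hypothetical evaluation data: query ID -> text, doc ID -> text, query ID -> relevant doc IDs
queries = {"q1": "What are the registered service marks of Citigroup Inc?"}
corpus = {
    "d1": "Arc Design is a registered service mark of Citigroup Inc.",
    "d2": "Deposits received before the end of a Business Day will be credited that day.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...
```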
Training Details
Training Dataset
Unnamed Dataset
- Size: 6,201 training samples
- Columns: `positive` and `anchor`
- Approximate statistics based on the first 1000 samples:

| | positive | anchor |
|---|---|---|
| type | string | string |
| details | min: 146 tokens, mean: 205.96 tokens, max: 289 tokens | min: 8 tokens, mean: 26.75 tokens, max: 241 tokens |
- Samples:

| positive | anchor |
|---|---|
| combined balances do not include: balances in delinquent accounts; balances that exceed your approved credit When Deposits Are Credited to an Account limit for any line of credit or credit card; or outstanding balances Deposits received before the end of a Business Day will be credited to your account that day. However, there been established for the Citigold Account Package. Your may be a delay before these funds are available for your use. See combined monthly balance range will be determined by computing the Funds Availability at Citibank section of this Marketplace an average of your monthly balances for your linked accounts Addendum for more information. during the prior calendar month. Monthly service fees are applied only to accounts with a combined average monthly balance range under the specified limits starting two statement cycles after account opening. Service fees assessed will appear as a charge on your next statement. 2 3 Combined Average Monthly Non- Per Special Circumstances Monthly Balance Service Citibank Check If a checking account is converted | What are the conditions for balances to be included in the combined balances? |
| the first six months, your credit score may not be where you want it just yet. There are other factors that impact your credit score including the length of your credit file, your credit mix and your credit utilization. If youre hoping to repair a credit score that has been damaged by financial setbacks, the timelines can be longer. A year or two with regular, timely payments and good credit utilization can push your credit score up. However, bankruptcies, collections, and late payments can linger on your credit report for anywhere from seven to ten years. That said, you may not have to use a secured credit card throughout your entire credit building process. Your goal may be to repair your credit to the point where your credit score is good enough to make you eligible for an unsecured credit card. To that end, youll need to research factors such as any fees that apply to the unsecured credit cards youre considering. There is no quick fix to having a great credit score. Building good credit with a | What factors impact your credit score including the length of your credit file, your credit mix, and your credit utilization? |
| by the index sponsor of the Constituents when it calculated the hypothetical back-tested index levels for the Constituents. It is impossible to predict whether the Index will rise or fall. The actual future performance of the Index may bear no relation to the historical or hypothetical back-tested levels of the Index. The Index Administrator, which is our Affiliate, and the Index Calculation Agent May Exercise Judgments under Certain Circumstances in the Calculation of the Index. Although the Index is rules- based, there are certain circumstances under which the Index Administrator or Index Calculation Agent may be required to exercise judgment in calculating the Index, including the following: The Index Administrator will determine whether an ambiguity, error or omission has arisen and the Index Administrator may resolve such ambiguity, error or omission, acting in good faith and in a commercially reasonable manner, and may amend the Index Rules to reflect the resolution of the ambiguity, error or omission in a manner that is consistent with the commercial objective of the Index. The Index Calculation Agents calculations | What circumstances may require the Index Administrator or Index Calculation Agent to exercise judgment in calculating the Index? |
- Loss: `MatryoshkaLoss` with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        768
    ],
    "matryoshka_weights": [
        1
    ],
    "n_dims_per_step": -1
}
```
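In code, this configuration amounts to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss` with a single 768-dimensional tier. A minimal sketch, since the original training script is not included in the card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch-negatives ranking loss over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)
# Apply the ranking objective at each listed embedding dimension (here only 768)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768], matryoshka_weights=[1])
```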
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
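As a sketch, these non-default values map directly onto `SentenceTransformerTrainingArguments`; the `output_dir` below is a hypothetical path, and `save_strategy` is an assumption added so that `load_best_model_at_end` can pair evaluation and save checkpoints:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-citi-detailed",  # hypothetical path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)
```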
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
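Putting the pieces together, a hedged end-to-end sketch of how a run with these hyperparameters could be assembled; the datasets here are hypothetical two-row stand-ins for the 6,201 real pairs, and `model`, `args`, `loss`, and `evaluator` are the objects sketched in the sections above:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer

# Hypothetical stand-ins with the same (positive, anchor) column layout as the real data
train_dataset = Dataset.from_dict({
    "positive": ["Arc Design is a registered service mark of Citigroup Inc."],
    "anchor": ["What are the registered service marks of Citigroup Inc?"],
})
eval_dataset = Dataset.from_dict({
    "positive": ["Deposits received before the end of a Business Day will be credited that day."],
    "anchor": ["When are deposits credited to an account?"],
})

trainer = SentenceTransformerTrainer(
    model=model,          # the SentenceTransformer from the loss sketch above
    args=args,            # SentenceTransformerTrainingArguments from the sketch above
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,            # MatryoshkaLoss built on the same model
    evaluator=evaluator,  # InformationRetrievalEvaluator from the evaluation sketch
)
trainer.train()
```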
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_map@100 |
---|---|---|---|
0.0515 | 10 | 0.7623 | - |
0.1031 | 20 | 0.6475 | - |
0.1546 | 30 | 0.4492 | - |
0.2062 | 40 | 0.3238 | - |
0.2577 | 50 | 0.2331 | - |
0.3093 | 60 | 0.2575 | - |
0.3608 | 70 | 0.3619 | - |
0.4124 | 80 | 0.1539 | - |
0.4639 | 90 | 0.1937 | - |
0.5155 | 100 | 0.241 | - |
0.5670 | 110 | 0.2192 | - |
0.6186 | 120 | 0.2553 | - |
0.6701 | 130 | 0.2438 | - |
0.7216 | 140 | 0.1916 | - |
0.7732 | 150 | 0.189 | - |
0.8247 | 160 | 0.1721 | - |
0.8763 | 170 | 0.2353 | - |
0.9278 | 180 | 0.1713 | - |
0.9794 | 190 | 0.2121 | - |
**1.0** | **194** | **-** | **0.6100** |
1.0309 | 200 | 0.1394 | - |
1.0825 | 210 | 0.156 | - |
1.1340 | 220 | 0.1276 | - |
1.1856 | 230 | 0.0969 | - |
1.2371 | 240 | 0.0811 | - |
1.2887 | 250 | 0.0699 | - |
1.3402 | 260 | 0.0924 | - |
1.3918 | 270 | 0.0838 | - |
1.4433 | 280 | 0.064 | - |
1.4948 | 290 | 0.0624 | - |
1.5464 | 300 | 0.0837 | - |
1.5979 | 310 | 0.0881 | - |
1.6495 | 320 | 0.1065 | - |
1.7010 | 330 | 0.0646 | - |
1.7526 | 340 | 0.084 | - |
1.8041 | 350 | 0.0697 | - |
1.8557 | 360 | 0.0888 | - |
1.9072 | 370 | 0.0873 | - |
1.9588 | 380 | 0.0755 | - |
2.0 | 388 | - | 0.6096 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title = {Matryoshka Representation Learning},
    author = {Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year = {2024},
    eprint = {2205.13147},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```