BGE base Financial Matryoshka
This is a sentence-transformers model fine-tuned from BAAI/bge-base-en-v1.5. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
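The stack above is a BERT encoder, CLS-token pooling, and L2 normalization. For illustration, here is a minimal sketch of what the three modules compute when applied by hand with the transformers library (loading the base checkpoint BAAI/bge-base-en-v1.5 as a stand-in; the fine-tuned weights are the ones in this repository):
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")

# (0) Transformer: tokenize (lowercased, truncated to 512 tokens) and encode
inputs = tokenizer(["example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state

# (1) Pooling: keep only the [CLS] token (pooling_mode_cls_token=True)
cls_embedding = token_embeddings[:, 0]

# (2) Normalize: L2-normalize so dot products equal cosine similarities
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 768])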
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("akashmaggon/bge-base-financial-matryoshka-finetuning-tcz-1")
# Run inference
sentences = [
'How is an Enterprise CMS different from a headless CMS?',
'Discover the right CMS for your Business Requirements\nHeadless CMS\nThey separate the backend content repository from the frontend presentation layer, allowing content to be delivered to any device or platform via APIs offering flexibility and scalability.\n\n\nEnterprise CMS\nECMSs are more comprehensive systems designed to manage all types of content within an organization, including documents, images, videos, and other digital assets.',
'How do I figure out how much your services will cost?\nDetermining the cost of our services is best achieved through a 15-30 minute discovery call, where we can understand your unique requirements. Following that, we will provide a transparent and detailed price within 24-48 hours tailored specifically to you',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
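Because the model was trained with MatryoshkaLoss, the leading dimensions of each embedding remain meaningful on their own, so you can trade a little accuracy for speed and storage by truncating the embeddings at load time. A short sketch using the truncate_dim argument:
# Load the same model, but truncate every embedding to its first 256 dimensions
model_256 = SentenceTransformer(
    "akashmaggon/bge-base-financial-matryoshka-finetuning-tcz-1",
    truncate_dim=256,
)
embeddings_256 = model_256.encode(sentences)
print(embeddings_256.shape)
# [3, 256]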
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
cosine_accuracy@3 | 0.0784 | 0.0784 | 0.0686 | 0.0588 | 0.0294 |
cosine_accuracy@5 | 0.402 | 0.402 | 0.3922 | 0.3137 | 0.2843 |
cosine_accuracy@10 | 0.5196 | 0.5196 | 0.5098 | 0.4902 | 0.4118 |
cosine_precision@1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
cosine_precision@3 | 0.0261 | 0.0261 | 0.0229 | 0.0196 | 0.0098 |
cosine_precision@5 | 0.0804 | 0.0804 | 0.0784 | 0.0627 | 0.0569 |
cosine_precision@10 | 0.052 | 0.052 | 0.051 | 0.049 | 0.0412 |
cosine_recall@1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
cosine_recall@3 | 0.0784 | 0.0784 | 0.0686 | 0.0588 | 0.0294 |
cosine_recall@5 | 0.402 | 0.402 | 0.3922 | 0.3137 | 0.2843 |
cosine_recall@10 | 0.5196 | 0.5196 | 0.5098 | 0.4902 | 0.4118 |
cosine_ndcg@10 | 0.2068 | 0.2059 | 0.202 | 0.1866 | 0.157 |
cosine_mrr@10 | 0.1119 | 0.1109 | 0.1089 | 0.0967 | 0.081 |
cosine_map@100 | 0.127 | 0.125 | 0.1212 | 0.1101 | 0.093 |
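Each column was produced by running the evaluator at one truncation dimension. A hedged sketch of how a single column can be reproduced; the queries, corpus, and relevance judgments below are hypothetical placeholders (the actual evaluation set is not published here), and model is the SentenceTransformer loaded in the Usage section:
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical placeholder data mapping query/document ids to text
queries = {"q1": "How is an Enterprise CMS different from a headless CMS?"}
corpus = {
    "d1": "Headless CMS: separates the backend repository from the frontend...",
    "d2": "Enterprise CMS: manages all content types within an organization...",
}
relevant_docs = {"q1": {"d1", "d2"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
    truncate_dim=768,  # repeat with 512, 256, 128 and 64 for the other columns
)
results = evaluator(model)  # returns accuracy@k, precision@k, recall@k, NDCG, MRR, MAP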
Training Details
Training Dataset
Unnamed Dataset
- Size: 408 training samples
- Columns:
anchor
andpositive
- Approximate statistics based on the first 408 samples:
 | anchor | positive |
---|---|---|
type | string | string |
details | min: 8 tokens<br>mean: 12.63 tokens<br>max: 21 tokens | min: 14 tokens<br>mean: 94.18 tokens<br>max: 270 tokens |
- Samples:
anchor | positive |
---|---|
What's it like working at Techchefz? | Join one of the most resourceful tech teams<br>Discover your future with us. Explore opportunities, values, and culture. Join a dynamic and innovative team at Techchefz.<br>LIFE AT TECHCHEFZ<br>Make an Impact from Day One.<br>We believe in the power of collaboration to create, innovate, and develop groundbreaking solutions. Our teams work closely with clients and partners to co-create solutions that drive innovation and business growth.<br>Your new journey awaits! |
How can I contact TechChefz if I'm in the US? | TechChefz Digital has established its presence in two countries, showcasing its global reach and influence. The company’s headquarters is strategically located in Noida, India, serving as the central hub for its operations and leadership. In addition to the headquarters, TechChefz Digital has expanded its footprint with offices in Delaware, United States, allowing the company to cater to the North American market with ease and efficiency. |
What results can I expect from your services? | We offer custom software development, digital marketing strategies, and tailored solutions to drive tangible results for your business. Our expert team combines technical prowess with industry insights to propel your business forward in the digital landscape. |
- Loss: MatryoshkaLoss with these parameters:
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
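A brief sketch of how this loss configuration is constructed with the sentence-transformers API (model being the SentenceTransformer under fine-tuning):
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# The inner loss treats the other positives in a batch as negatives;
# MatryoshkaLoss applies it at every truncated dimensionality with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)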
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
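As a sketch, these values map onto the sentence-transformers v3 training API roughly as follows. The output_dir and the single-row train_dataset are placeholders, save_strategy="epoch" is an added assumption (load_best_model_at_end requires save and eval strategies to match), and model, loss, and evaluator are as constructed in the sketches above:
from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

train_dataset = Dataset.from_dict({  # placeholder for the 408 anchor/positive pairs
    "anchor": ["What's it like working at Techchefz?"],
    "positive": ["Join one of the most resourceful tech teams..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # effective batch size: 16 * 16 = 256
    learning_rate=2e-05,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()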
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|
0.6154 | 1 | 0.2038 | 0.1993 | 0.1953 | 0.1764 | 0.1595 |
1.6154 | 2 | 0.2038 | 0.1993 | 0.1953 | 0.1764 | 0.1595 |
**2.6154** | **3** | **0.2068** | **0.2059** | **0.202** | **0.1866** | **0.157** |
3.6154 | 4 | 0.2068 | 0.2059 | 0.2020 | 0.1866 | 0.1570 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}