UAE-Large-V1-financial-embeddings-matryoshka
This is a sentence-transformers model finetuned from WhereIsAI/UAE-Large-V1. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: WhereIsAI/UAE-Large-V1
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
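The printout shows that this model uses CLS-token pooling rather than mean pooling. For reference, a minimal sketch of how the same two-module stack could be assembled by hand (this rebuilds the base configuration, not the fine-tuned checkpoint):

```python
from sentence_transformers import SentenceTransformer, models

# BERT encoder followed by CLS-token pooling, mirroring the config above.
word_embedding = models.Transformer("WhereIsAI/UAE-Large-V1", max_seq_length=512)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 1024
    pooling_mode="cls",  # pooling_mode_cls_token=True in the printout above
)
model = SentenceTransformer(modules=[word_embedding, pooling])
print(model)  # should mirror the architecture shown above
```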
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rbhatia46/UAE-Large-V1-financial-rag-matryoshka")
# Run inference
sentences = [
'According to Johnson & Johnson’s 2024 guidance report, their pharmaceutical sector was projected to grow by 7% in 2023 after considering crucial factors like the overall market demand, introduction of new drugs and potential impact of patent expirations.',
'What was the projected growth of Johnson & Johnson’s pharmaceutical sector in 2023?',
'How is JPMorgan Chase & Co. improving its cybersecurity measures?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
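Because the model was trained with MatryoshkaLoss, its embeddings can be truncated to any of the dimensions evaluated below (1024, 768, 512, 256, 128, 64) at a modest cost in retrieval quality. A minimal sketch using the `truncate_dim` argument (available in sentence-transformers >= 2.7):

```python
from sentence_transformers import SentenceTransformer

# Load the model so encode() keeps only the first 256 dimensions.
model = SentenceTransformer(
    "rbhatia46/UAE-Large-V1-financial-rag-matryoshka",
    truncate_dim=256,
)
embeddings = model.encode(
    ["How is JPMorgan Chase & Co. improving its cybersecurity measures?"]
)
print(embeddings.shape)
# (1, 256)
```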
Evaluation
Metrics
Information Retrieval
- Dataset: `dim_1024`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8316 |
cosine_accuracy@3 | 0.9326 |
cosine_accuracy@5 | 0.9663 |
cosine_accuracy@10 | 0.9896 |
cosine_precision@1 | 0.8316 |
cosine_precision@3 | 0.3109 |
cosine_precision@5 | 0.1933 |
cosine_precision@10 | 0.099 |
cosine_recall@1 | 0.8316 |
cosine_recall@3 | 0.9326 |
cosine_recall@5 | 0.9663 |
cosine_recall@10 | 0.9896 |
cosine_ndcg@10 | 0.9114 |
cosine_mrr@10 | 0.8861 |
cosine_map@100 | 0.8866 |
Information Retrieval
- Dataset: `dim_768`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.829 |
cosine_accuracy@3 | 0.9326 |
cosine_accuracy@5 | 0.9663 |
cosine_accuracy@10 | 0.9845 |
cosine_precision@1 | 0.829 |
cosine_precision@3 | 0.3109 |
cosine_precision@5 | 0.1933 |
cosine_precision@10 | 0.0984 |
cosine_recall@1 | 0.829 |
cosine_recall@3 | 0.9326 |
cosine_recall@5 | 0.9663 |
cosine_recall@10 | 0.9845 |
cosine_ndcg@10 | 0.9098 |
cosine_mrr@10 | 0.8854 |
cosine_map@100 | 0.8863 |
Information Retrieval
- Dataset: `dim_512`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8238 |
cosine_accuracy@3 | 0.9378 |
cosine_accuracy@5 | 0.9637 |
cosine_accuracy@10 | 0.9845 |
cosine_precision@1 | 0.8238 |
cosine_precision@3 | 0.3126 |
cosine_precision@5 | 0.1927 |
cosine_precision@10 | 0.0984 |
cosine_recall@1 | 0.8238 |
cosine_recall@3 | 0.9378 |
cosine_recall@5 | 0.9637 |
cosine_recall@10 | 0.9845 |
cosine_ndcg@10 | 0.9085 |
cosine_mrr@10 | 0.8836 |
cosine_map@100 | 0.8844 |
Information Retrieval
- Dataset: `dim_256`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8212 |
cosine_accuracy@3 | 0.9326 |
cosine_accuracy@5 | 0.9611 |
cosine_accuracy@10 | 0.9793 |
cosine_precision@1 | 0.8212 |
cosine_precision@3 | 0.3109 |
cosine_precision@5 | 0.1922 |
cosine_precision@10 | 0.0979 |
cosine_recall@1 | 0.8212 |
cosine_recall@3 | 0.9326 |
cosine_recall@5 | 0.9611 |
cosine_recall@10 | 0.9793 |
cosine_ndcg@10 | 0.9051 |
cosine_mrr@10 | 0.8807 |
cosine_map@100 | 0.8817 |
Information Retrieval
- Dataset: `dim_128`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8187 |
cosine_accuracy@3 | 0.9352 |
cosine_accuracy@5 | 0.9611 |
cosine_accuracy@10 | 0.9793 |
cosine_precision@1 | 0.8187 |
cosine_precision@3 | 0.3117 |
cosine_precision@5 | 0.1922 |
cosine_precision@10 | 0.0979 |
cosine_recall@1 | 0.8187 |
cosine_recall@3 | 0.9352 |
cosine_recall@5 | 0.9611 |
cosine_recall@10 | 0.9793 |
cosine_ndcg@10 | 0.9031 |
cosine_mrr@10 | 0.8782 |
cosine_map@100 | 0.8793 |
Information Retrieval
- Dataset: `dim_64`
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.7979 |
cosine_accuracy@3 | 0.9223 |
cosine_accuracy@5 | 0.9585 |
cosine_accuracy@10 | 0.9793 |
cosine_precision@1 | 0.7979 |
cosine_precision@3 | 0.3074 |
cosine_precision@5 | 0.1917 |
cosine_precision@10 | 0.0979 |
cosine_recall@1 | 0.7979 |
cosine_recall@3 | 0.9223 |
cosine_recall@5 | 0.9585 |
cosine_recall@10 | 0.9793 |
cosine_ndcg@10 | 0.8936 |
cosine_mrr@10 | 0.8655 |
cosine_map@100 | 0.8667 |
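The six tables above correspond to one evaluation run per Matryoshka dimension. A minimal sketch of how such numbers can be reproduced with `InformationRetrievalEvaluator`; the tiny `queries`/`corpus`/`relevant_docs` dicts here are hypothetical stand-ins for the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rbhatia46/UAE-Large-V1-financial-rag-matryoshka")

# Hypothetical evaluation data: id -> text, and query id -> set of relevant corpus ids.
queries = {"q1": "What is the key risk factor faced by Exxon Mobil in the energy sector?"}
corpus = {"d1": "Exxon Mobil faces substantial risk factors including fluctuating market prices ..."}
relevant_docs = {"q1": {"d1"}}

for dim in [1024, 768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # score using only the first `dim` embedding dimensions
    )
    print(evaluator(model))
```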
Training Details
Training Dataset
Unnamed Dataset
- Size: 3,474 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

 | positive | anchor |
---|---|---|
type | string | string |
details | min: 15 tokens, mean: 44.84 tokens, max: 112 tokens | min: 8 tokens, mean: 18.34 tokens, max: 32 tokens |
- Samples:
positive | anchor |
---|---|
Exxon Mobil faces substantial risk factors including fluctuating market prices for oil and gas, regulatory environment changes and the potential for catastrophic accidents such as oil spills. | What is the key risk factor faced by Exxon Mobil in the energy sector? |
Tesla’s remarkable revenue growth in 2023 is largely driven by its robust electric vehicle sales in China and the strong demand for its energy storage products. | What is the main reason behind Tesla’s revenue growth in 2023? |
Amazon is expected to see a sales growth of 23% in the next financial year, driven by the increased demand for their ecommerce business and strong growth in AWS. This projection is subject to changes in the market condition and customer spending patterns. | What is the projected sales growth for Amazon in the next financial year? |
- Loss: `MatryoshkaLoss` with these parameters:

    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [1024, 768, 512, 256, 128, 64],
        "matryoshka_weights": [1, 1, 1, 1, 1, 1],
        "n_dims_per_step": -1
    }
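For reference, a minimal sketch of how this loss could be constructed around a pair dataset (the one-row dataset below is illustrative, not the actual training data; `MultipleNegativesRankingLoss` treats the first dataset column as the anchor and the second as the positive, with the other in-batch positives serving as negatives):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("WhereIsAI/UAE-Large-V1")

# Illustrative (anchor, positive) pair; the real run used 3,474 such samples.
train_dataset = Dataset.from_dict({
    "anchor": ["What is the main reason behind Tesla's revenue growth in 2023?"],
    "positive": ["Tesla's remarkable revenue growth in 2023 is largely driven by ..."],
})

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],  # weights default to 1 each
)
```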
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
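Expressed as code, the non-default values above correspond roughly to the following `SentenceTransformerTrainingArguments` (a sketch: `output_dir` is a placeholder, and `save_strategy="epoch"` is an assumption needed so that `load_best_model_at_end` has matching eval/save strategies):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/uae-large-v1-financial-matryoshka",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: load_best_model_at_end needs matching strategies
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)
```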
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
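Tying the earlier sketches together, a hypothetical end-to-end fine-tuning run would hand the model, dataset, loss, and arguments to `SentenceTransformerTrainer`:

```python
from sentence_transformers import SentenceTransformerTrainer

# `model`, `train_dataset`, `loss`, and `args` as built in the sketches above.
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
model.save_pretrained("output/uae-large-v1-financial-matryoshka/final")  # placeholder path
```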
Training Logs
Epoch | Step | Training Loss | dim_1024_cosine_map@100 | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
---|---|---|---|---|---|---|---|---|
0.8807 | 6 | - | 0.8708 | 0.8499 | 0.8647 | 0.8705 | 0.8307 | 0.8700 |
1.4679 | 10 | 0.7358 | - | - | - | - | - | - |
1.9083 | 13 | - | 0.8848 | 0.8724 | 0.8782 | 0.8861 | 0.8617 | 0.8855 |
2.9358 | 20 | 0.1483 | 0.8865 | 0.8793 | 0.8814 | 0.8857 | 0.8667 | 0.8863 |
**3.5229** | **24** | - | **0.8866** | **0.8793** | **0.8817** | **0.8844** | **0.8667** | **0.8863** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.6
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}