SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
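The stack above is just three steps: a BERT encoder, CLS-token pooling, and L2 normalization. As a minimal sketch of what those modules compute, assuming only the transformers library (the base checkpoint is used here purely for illustration; in practice you would load this finetuned repo instead):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")

def embed(texts):
    # (0) Transformer: tokenize (lowercased, truncated to 512 tokens) and encode with BERT
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # [batch, seq_len, 768]
    # (1) Pooling: pooling_mode_cls_token=True -> keep only the [CLS] vector
    cls = hidden[:, 0]  # [batch, 768]
    # (2) Normalize: unit-length vectors, so dot product equals cosine similarity
    return F.normalize(cls, p=2, dim=1)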
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ivanleomk/finetuned-bge-bai")
# Run inference
sentences = [
'\nName : CloudFlare Inc.\nCategory: Internet & Network Services, SaaS\nDepartment: IT Operations\nLocation: New York, NY\nAmount: 2000.0\nCard: Annual Cloud Services Budget\nTrip Name: unknown\n',
'\nName : TelecomMastery Solutions\nCategory: Cloud Infrastructure & Hosting, Telecommunications Services\nDepartment: IT Operations\nLocation: Zurich, Switzerland\nAmount: 1583.45\nCard: Global Connectivity Enhancement\nTrip Name: unknown\n',
'\nName : Nimbus Streamline\nCategory: Cloud Services, Internet Infrastructure\nDepartment: IT Operations\nLocation: Berlin, Germany\nAmount: 1376.49\nCard: Distributed Server Management\nTrip Name: unknown\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
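As a small follow-on sketch, the same scores can drive retrieval: treat the first record as a query and rank the other two by cosine similarity (the embeddings are unit-normalized, so cosine and dot product coincide).

# Continuing the snippet above
query_emb = model.encode([sentences[0]])        # shape [1, 768]
doc_embs = model.encode(sentences[1:])          # shape [2, 768]
scores = model.similarity(query_emb, doc_embs)  # shape [1, 2]
best = int(scores.argmax())
print(f"Closest record: index {best + 1}, score {scores[0, best].item():.4f}")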
Evaluation
Metrics
Triplet
- Dataset: ramp-finetune-eval
- Evaluated with TripletEvaluator (a usage sketch follows the metric tables below)
Metric | Value |
---|---|
cosine_accuracy | 0.0 |
dot_accuracy | 0.0 |
manhattan_accuracy | 0.0 |
euclidean_accuracy | 0.0 |
max_accuracy | 0.0 |
Triplet
- Dataset: ramp-finetune-test
- Evaluated with TripletEvaluator
Metric | Value |
---|---|
cosine_accuracy | 0.0 |
dot_accuracy | 0.0 |
manhattan_accuracy | 0.0 |
euclidean_accuracy | 0.0 |
max_accuracy | 0.0 |
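For each (anchor, positive, negative) triplet, TripletEvaluator checks whether the anchor embedding lies closer to the positive than to the negative under each distance, and reports the fraction of triplets where it does. A minimal sketch of how such an evaluation could be wired up; the triplet strings below are placeholders, not the actual eval data:

from sentence_transformers.evaluation import TripletEvaluator

# Placeholder triplets in the same flattened-record format used above
anchors = ["Name : Vendor A\nCategory: Cloud Services\nDepartment: IT Operations"]
positives = ["Name : Vendor B\nCategory: Cloud Infrastructure\nDepartment: IT Operations"]
negatives = ["Name : Vendor C\nCategory: Office Catering\nDepartment: Facilities"]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="ramp-finetune-eval",
)
results = evaluator(model)  # dict of accuracies keyed by distance function
print(results)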
Training Details
Training Dataset
Unnamed Dataset
- Size: 208 training samples
- Columns: sentence and label
- Approximate statistics based on the first 208 samples:
  - sentence: type string; min: 33 tokens, mean: 39.66 tokens, max: 48 tokens
  - label: type int; distribution: 0: ~4.81%, 1: ~5.29%, 2: ~6.25%, 3: ~2.40%, 4: ~3.85%, 5: ~4.33%, 6: ~3.85%, 7: ~2.40%, 8: ~4.81%, 9: ~3.37%, 10: ~3.85%, 11: ~3.85%, 12: ~4.81%, 13: ~4.81%, 14: ~5.29%, 15: ~3.37%, 16: ~4.81%, 17: ~4.33%, 18: ~3.85%, 19: ~1.92%, 20: ~2.88%, 21: ~2.88%, 22: ~3.37%, 23: ~0.96%, 24: ~4.33%, 25: ~2.40%, 26: ~0.96%
- Samples:

sentence | label |
---|---|
Name : Global Insights Group<br>Category: Subscriptions & Memberships, Data Services & Analytics<br>Department: Marketing<br>Location: London, UK<br>Amount: 1245.67<br>Card: Marketing Intelligence Fund<br>Trip Name: unknown | 0 |
Name : CyberGuard Provisions<br>Category: Security Software Solutions, Data Protection Services<br>Department: Information Security<br>Location: San Francisco, CA<br>Amount: 879.92<br>Card: Digital Fortress Action Plan<br>Trip Name: unknown | 1 |
Name : Apex Innovations Group<br>Category: Business Consulting, Training Services<br>Department: Executive<br>Location: Sydney, Australia<br>Amount: 1575.34<br>Card: Leadership Development Program<br>Trip Name: unknown | 2 |
- Loss: BatchSemiHardTripletLoss (a minimal training sketch follows below)
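BatchSemiHardTripletLoss mines triplets on the fly from the integer labels within each batch, which is why the dataset needs exactly these sentence/label columns (and why the no_duplicates batch sampler listed under the hyperparameters matters). A minimal training sketch under that assumption; the rows below are illustrative, not the actual data:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import BatchSemiHardTripletLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Each row pairs a flattened expense record with an integer class label
train_dataset = Dataset.from_dict({
    "sentence": [
        "Name : Vendor A\nCategory: Cloud Services\nDepartment: IT Operations",
        "Name : Vendor B\nCategory: Cloud Infrastructure\nDepartment: IT Operations",
        "Name : Vendor C\nCategory: Office Catering\nDepartment: Facilities",
    ],
    "label": [0, 0, 1],
})

# Semi-hard triplet mining happens inside the loss, per batch, using the labels
loss = BatchSemiHardTripletLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()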
Evaluation Dataset
Unnamed Dataset
- Size: 66 evaluation samples
- Columns: sentence and label
- Approximate statistics based on the first 66 samples:
  - sentence: type string; min: 35 tokens, mean: 39.89 tokens, max: 45 tokens
  - label: type int; distribution: 0: ~1.52%, 1: ~4.55%, 2: ~4.55%, 3: ~7.58%, 5: ~6.06%, 6: ~4.55%, 7: ~1.52%, 8: ~3.03%, 9: ~1.52%, 10: ~6.06%, 11: ~1.52%, 13: ~4.55%, 14: ~4.55%, 17: ~6.06%, 18: ~4.55%, 19: ~6.06%, 20: ~3.03%, 21: ~1.52%, 22: ~7.58%, 23: ~7.58%, 24: ~3.03%, 25: ~4.55%, 26: ~4.55%
- Samples:

sentence | label |
---|---|
Name : Skyline Digital Solutions<br>Category: Cloud Management Services, Internet & Network Services<br>Department: IT Operations<br>Location: Sydney, Australia<br>Amount: 1128.37<br>Card: Global Networking Project<br>Trip Name: unknown | 14 |
Name : Global Assurance Solutions<br>Category: Enterprise Risk Management, Strategic Business Advisory<br>Department: Finance<br>Location: Zurich, Switzerland<br>Amount: 1358.92<br>Card: Comprehensive Risk Assessment Framework<br>Trip Name: unknown | 6 |
Name : Nihon Global Ventures<br>Category: Consulting Services, Technology Implementation<br>Department: IT Operations<br>Location: Tokyo, Japan<br>Amount: 1453.17<br>Card: Network Optimization Program<br>Trip Name: unknown | 18 |
- Loss: BatchSemiHardTripletLoss
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
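For reference, a sketch of how those non-default values map onto SentenceTransformerTrainingArguments (the output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-bge-bai",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)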
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | ramp-finetune-eval_max_accuracy | ramp-finetune-test_max_accuracy |
---|---|---|---|
0 | 0 | 0.0 | - |
1.0 | 13 | - | 0.0 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
BatchSemiHardTripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}