SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
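The default similarity function is cosine similarity, so scores lie in [-1, 1], with higher values indicating more similar embeddings. As a minimal illustration (not part of the released model, just NumPy):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product of the two vectors divided by the product of their norms;
    # this is the score model.similarity() reports by default for this model.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```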
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
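The Pooling module is configured for CLS-token pooling (`pooling_mode_cls_token: True`), so the sentence embedding is the first token's final hidden state rather than a mean over all tokens. Below is a minimal sketch of what the two modules compute, assuming the underlying BertModel can be loaded directly with the transformers library (as Sentence Transformers repositories normally allow):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying BertModel and tokenizer from the same repository.
tokenizer = AutoTokenizer.from_pretrained("T-Blue/tsdae_pro_MiniLM_L12_2")
encoder = AutoModel.from_pretrained("T-Blue/tsdae_pro_MiniLM_L12_2")

inputs = tokenizer(["an example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state  # [batch, seq_len, 384]

# CLS pooling: the sentence embedding is the first token's hidden state.
sentence_embedding = token_embeddings[:, 0]  # [batch, 384]
print(sentence_embedding.shape)
```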
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("T-Blue/tsdae_pro_MiniLM_L12_2")
# Run inference
sentences = [
    'ब𑀫𑁣𑀳प 𑀢𑀢 𑀳𑀫𑀢𑀟𑁦 𑀠च𑀲𑀢 𑀠च𑀫𑀢𑀠𑀠च𑀟त𑀢𑀦 पच𑀢𑀠च𑀞𑁣𑀟 𑀣च 𑀲च𑀳चलनललन𑀞च णच𑀟च ढच 𑀠च𑀤चन𑀟च त𑀢𑀞𑀢𑀟 𑀫च𑀟𑀞चल𑀢 णचण𑀢𑀟',
    'च𑀠𑀢𑀟पचतत𑀢णच च त𑀢𑀞𑀢𑀟 ब𑀫𑁣𑀳प 𑀳𑁦𑀪𑀢𑁦𑀳 𑀢𑀢 𑀳𑀫𑀢𑀟𑁦 𑀠च𑀲𑀢 𑀠च𑀫𑀢𑀠𑀠च𑀟त𑀢𑀦 पच𑀪𑁦 𑀣च ञ𑀢𑀠ढ𑀢𑀟 𑀢𑀟बच𑀟पचपपन𑀟 प𑀳च𑀪𑀢𑀟 पच𑀢𑀠च𑀞𑁣𑀟 𑀣𑀢𑀪𑁦ढच 𑀣च 𑀲च𑀳चलनललन𑀞च 𑀟च च𑀠𑀢𑀟त𑀢𑀦 णच𑀟च ढच 𑀠च𑀤चन𑀟च त𑀢𑀞𑀢𑀟 𑀞𑀱च𑀟त𑀢णच𑀪 𑀫च𑀟𑀞चल𑀢 णचण𑀢𑀟 पच𑀲𑀢णच𑀪𑀳न𑀯',
    'प𑁣ध𑀳ण ध𑀫𑀢𑀪𑀢 𑀝च𑀟 𑀫च𑀢𑀲𑁦 𑀳𑀫𑀢 च 𑀪च𑀟च𑀪 𑀭𑀭 बच 𑀱चपच𑀟 चबन𑀳पच 𑀭थ𑀗𑀧𑀮 ञच𑀟 𑀱च𑀳च𑀟 ढच𑀣𑀠𑀢𑀟प𑁣𑀟 ञच𑀟 𑀤च𑀠ढ𑀢च 𑀟𑁦𑀯',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
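The same embeddings support a small semantic-search loop. The corpus and query below are hypothetical placeholders, reusing the `model` loaded above:

```python
# Hypothetical semantic-search example (corpus and query are placeholders).
corpus = [
    "first candidate document",
    "second candidate document",
    "third candidate document",
]
query = "a search query"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# model.similarity() applies the model's similarity function (cosine here).
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = scores[0].argmax().item()
print(corpus[best], float(scores[0][best]))
```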
Training Details
Training Dataset
Unnamed Dataset
- Size: 64,000 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 1000 samples:

 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 4 tokens, mean: 37.72 tokens, max: 292 tokens | min: 4 tokens, mean: 90.07 tokens, max: 512 tokens |
- Samples:

sentence_0 | sentence_1 |
---|---|
𑀞न𑀣न ढ𑀢𑀪𑀟𑀢𑀟𑀦𑀞न𑀳च प𑁦𑀞न𑀟 | प𑁦𑀞न𑀟 पचबच णच𑀟च 𑀞न𑀣न 𑀣च ढ𑀢𑀪𑀟𑀢𑀟𑀦𑀞न𑀳च 𑀣च प𑁦𑀞न𑀟 पचत𑀫𑁣बच𑀯 |
च त𑀢ढ𑀢ण𑁣ण𑀢𑀟 𑀳च𑀣च𑀪𑀱च𑀪 𑀳न झच𑀪च 𑀠चप𑀳चण𑀢𑀟 | चढ𑁣𑀞च𑀢𑀞च𑀠च𑀪 च णच𑀱च𑀟त𑀢𑀟 त𑀢ढ𑀢ण𑁣ण𑀢𑀟 𑀳च𑀣च𑀪𑀱च𑀪 𑀘च𑀠च𑀙च𑀦 𑀠च𑀳न च𑀠𑀲च𑀟𑀢 𑀤च 𑀳न 𑀢णच झच𑀪च 𑀠नपच𑀟𑁦 च 𑀠चप𑀳चण𑀢𑀟 चढ𑁣𑀞च𑀟𑀳न𑀯 |
𑀣च बन𑀣न𑀠𑀠च𑀱च 𑀘च𑀪𑀢𑀣न𑀟 𑀠न𑀘चललन पच 𑀯 | पच ढच 𑀣च बन𑀣न𑀠𑀠च𑀱च बच 𑀘च𑀪𑀢𑀣न𑀟 च𑀟च𑀪त𑀫𑀢𑀳प 𑀣चढच𑀟ष𑀣चढच𑀟 𑀣च 𑀠न𑀘चललन 𑀠च𑀳न चलचझच 𑀣च झन𑀟ब𑀢णच𑀪 𑀠च𑀙च𑀢𑀞चपच 𑀙णच𑀟त𑀢 पच 𑀘च𑀠न𑀳 𑀯 |
- Loss: DenoisingAutoEncoderLoss
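DenoisingAutoEncoderLoss is the TSDAE objective: the encoder compresses a corrupted sentence into a single vector, and a decoder (here tied to the encoder's weights) must reconstruct the original sentence from that vector. A minimal sketch of a comparable setup with the Sentence Transformers v3 trainer follows; the sentence pair and the output directory are placeholders, not taken from this card's training set:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import DenoisingAutoEncoderLoss

# Start from the same base model as this card.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder pairs: the first column is a corrupted sentence, the second
# the original that the decoder must reconstruct from the pooled embedding.
train_dataset = Dataset.from_dict({
    "sentence_0": ["corrupted sentence with deleted"],
    "sentence_1": ["a corrupted sentence with several words deleted from it"],
})

# TSDAE objective; the decoder shares (ties) weights with the encoder.
loss = DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

trainer = SentenceTransformerTrainer(
    model=model,
    args=SentenceTransformerTrainingArguments(output_dir="output/tsdae"),  # hypothetical dir
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```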
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
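For orientation, the non-default values above map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir is a hypothetical placeholder, not from this card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/tsdae_pro_MiniLM_L12_2",  # hypothetical path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler="round_robin",  # round-robin sampling across datasets
    num_train_epochs=3,
    learning_rate=5e-5,
)
```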
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.125 | 500 | 2.5392 |
0.25 | 1000 | 1.4129 |
0.375 | 1500 | 1.3383 |
0.5 | 2000 | 1.288 |
0.625 | 2500 | 1.2627 |
0.75 | 3000 | 1.239 |
0.875 | 3500 | 1.2208 |
1.0 | 4000 | 1.2041 |
1.125 | 4500 | 1.1743 |
1.25 | 5000 | 1.1633 |
1.375 | 5500 | 1.1526 |
1.5 | 6000 | 1.1375 |
1.625 | 6500 | 1.1313 |
1.75 | 7000 | 1.1246 |
1.875 | 7500 | 1.1162 |
2.0 | 8000 | 1.1096 |
2.125 | 8500 | 1.0876 |
2.25 | 9000 | 1.0839 |
2.375 | 9500 | 1.0791 |
2.5 | 10000 | 1.0697 |
2.625 | 10500 | 1.0671 |
2.75 | 11000 | 1.0644 |
2.875 | 11500 | 1.0579 |
3.0 | 12000 | 1.0528 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.33.0
- Datasets: 2.18.0
- Tokenizers: 0.19.1
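To approximate this environment, the versions above can be pinned at install time (the +cu121 PyTorch build additionally depends on the CUDA wheel index):

```
pip install sentence-transformers==3.0.1 transformers==4.42.4 torch==2.3.1 accelerate==0.33.0 datasets==2.18.0 tokenizers==0.19.1
```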
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
DenoisingAutoEncoderLoss
@inproceedings{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
pages = "671--688",
url = "https://arxiv.org/abs/2104.06979",
}