SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-MiniLM-L6-v2
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
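The Pooling module above mean-pools the token embeddings produced by the BERT encoder into a single 384-dimensional sentence vector. As a rough illustration of that step, here is a minimal mean-pooling sketch using transformers directly; it assumes the repository exposes the underlying encoder in the standard transformers layout.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: the Hub repo can be loaded directly with AutoModel/AutoTokenizer.
tokenizer = AutoTokenizer.from_pretrained("tomasravel/modelo_finetuneadoX")
encoder = AutoModel.from_pretrained("tomasravel/modelo_finetuneadoX")

def mean_pool(token_embeddings, attention_mask):
    # Average token embeddings over the sequence, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

inputs = tokenizer(
    ["buenos aires la plata calle 68 desde 351 hasta 399"],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state
sentence_embedding = mean_pool(token_embeddings, inputs["attention_mask"])
print(sentence_embedding.shape)  # torch.Size([1, 384])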
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomasravel/modelo_finetuneadoX")
# Run inference
sentences = [
'buenos aires lomas de zamora dr manuel a de acevedo desde 101 hasta 199',
'buenos aires lomas de zamora ayacucho desde 101 hasta 199',
'buenos aires general pueyrredon ingeniero white rio negro desde 4202 hasta 4300',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
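Because the model targets semantic search among other tasks, the embeddings can also be used to retrieve the closest entries from a larger set of address strings. A minimal sketch using sentence_transformers.util.semantic_search (the corpus below is made up for illustration):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tomasravel/modelo_finetuneadoX")

# Hypothetical corpus of normalized address strings.
corpus = [
    'buenos aires lomas de zamora ayacucho desde 101 hasta 199',
    'buenos aires la plata calle 68 desde 351 hasta 399',
    'buenos aires bahia blanca neuquen desde 1401 hasta 1499',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = 'buenos aires lomas de zamora dr manuel a de acevedo desde 101 hasta 199'
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the most similar corpus entries by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))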
Training Details
Training Dataset
Unnamed Dataset
- Size: 93,864 training samples
- Columns: sentence_0, sentence_1, and label
- Approximate statistics based on the first 1000 samples:
 | sentence_0 | sentence_1 | label |
---|---|---|---|
type | string | string | float |
details | min: 13 tokens, mean: 20.92 tokens, max: 29 tokens | min: 8 tokens, mean: 19.61 tokens, max: 29 tokens | min: 0.5, mean: 0.7, max: 1.0 |
- Samples:
sentence_0 | sentence_1 | label |
---|---|---|
buenos aires moreno santa teresa de jesus desde 1902 hasta 2000 | buenos aires marcos juarez santa teresa de jesus desde 1902 hasta 2000 | 0.5 |
buenos aires la plata calle 68 desde 351 hasta 399 | buenos aires la plata calle 68 699 | 0.8894792214623882 |
buenos aires bahia blanca neuquen desde 1401 hasta 1499 | buenos aires bahia blanca neuquen 3099 | 0.8210941609679117 |
- Loss: CoSENTLoss with these parameters (see the sketch below): { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
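For reference, this is roughly how a CoSENTLoss with those parameters is instantiated in Sentence Transformers (a sketch, not the exact training script used for this model):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.util import pairwise_cos_sim

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")
# scale=20.0 and pairwise cosine similarity match the parameters listed above.
loss = CoSENTLoss(model=model, scale=20.0, similarity_fct=pairwise_cos_sim)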
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- multi_dataset_batch_sampler: round_robin
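Put together, a training run with these non-default values could look like the following sketch; the dataset here is a tiny stand-in built from the sample rows above, and the output directory name is hypothetical.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")

# Tiny stand-in for the real 93,864-pair dataset; labels are continuous similarity scores.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "buenos aires moreno santa teresa de jesus desde 1902 hasta 2000",
        "buenos aires la plata calle 68 desde 351 hasta 399",
        "buenos aires bahia blanca neuquen desde 1401 hasta 1499",
    ],
    "sentence_1": [
        "buenos aires marcos juarez santa teresa de jesus desde 1902 hasta 2000",
        "buenos aires la plata calle 68 699",
        "buenos aires bahia blanca neuquen 3099",
    ],
    "label": [0.5, 0.8894792214623882, 0.8210941609679117],
})

args = SentenceTransformerTrainingArguments(
    output_dir="modelo_finetuneadoX",           # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler="round_robin",  # only relevant when training on multiple datasets
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CoSENTLoss(model),
)
trainer.train()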
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.0852 | 500 | 3.5442 |
0.1704 | 1000 | 2.9896 |
0.2557 | 1500 | 2.7276 |
0.3409 | 2000 | 2.5357 |
0.4261 | 2500 | 2.4514 |
0.5113 | 3000 | 2.2637 |
0.5966 | 3500 | 2.2494 |
0.6818 | 4000 | 2.175 |
0.7670 | 4500 | 2.1082 |
0.8522 | 5000 | 2.127 |
0.9374 | 5500 | 1.9948 |
1.0227 | 6000 | 2.0765 |
1.1079 | 6500 | 2.0432 |
1.1931 | 7000 | 1.9714 |
1.2783 | 7500 | 1.9014 |
1.3636 | 8000 | 1.8878 |
1.4488 | 8500 | 1.8607 |
1.5340 | 9000 | 1.7908 |
1.6192 | 9500 | 1.7575 |
1.7044 | 10000 | 1.7601 |
1.7897 | 10500 | 1.79 |
1.8749 | 11000 | 1.7361 |
1.9601 | 11500 | 1.7299 |
2.0453 | 12000 | 1.7849 |
2.1306 | 12500 | 1.7389 |
2.2158 | 13000 | 1.755 |
2.3010 | 13500 | 1.6725 |
2.3862 | 14000 | 1.6453 |
2.4715 | 14500 | 1.5906 |
2.5567 | 15000 | 1.569 |
2.6419 | 15500 | 1.533 |
2.7271 | 16000 | 1.566 |
2.8123 | 16500 | 1.6377 |
2.8976 | 17000 | 1.5948 |
2.9828 | 17500 | 1.58 |
Framework Versions
- Python: 3.9.12
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.2.2
- Accelerate: 0.34.2
- Datasets: 2.21.0
- Tokenizers: 0.19.1
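To approximate this environment, the matching releases can be pinned explicitly (standard PyPI package names; the exact torch build may depend on your platform):

pip install sentence-transformers==3.0.1 transformers==4.44.2 torch==2.2.2 accelerate==0.34.2 datasets==2.21.0 tokenizers==0.19.1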
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CoSENTLoss
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}