Approche 5
This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
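
The Pooling module above builds the sentence embedding by mean-pooling the token embeddings (pooling_mode_mean_tokens: True). Below is a minimal sketch of that computation in plain 🤗 Transformers, assuming the checkpoint ships the standard Transformers files (as sentence-transformers repositories do); it is illustrative rather than a drop-in replacement for model.encode, which also handles batching and prompts.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bourdoiscatie/multilingual-e5-large-approche5")
encoder = AutoModel.from_pretrained("bourdoiscatie/multilingual-e5-large-approche5")

batch = tokenizer(
    ["La femme est en route pour un rendez-vous."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# Mean pooling: zero out padding positions, then average the remaining tokens.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 1024])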
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("bourdoiscatie/multilingual-e5-large-approche5")
# Run inference
sentences = [
"Tenet est sous surveillance depuis novembre, lorsque l'ancien directeur général Jeffrey Barbakow a déclaré que la société a utilisé des prix agressifs pour déclencher des paiements plus élevés pour les patients les plus malades de l'assurance maladie.",
"En novembre, Jeffrey Brabakow, le directeur général de l'époque, a déclaré que la société utilisait des prix agressifs pour obtenir des paiements plus élevés pour les patients les plus malades de l'assurance maladie.",
'La femme est en route pour un rendez-vous.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
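
The base model, intfloat/multilingual-e5-large, was trained with "query: " and "passage: " input prefixes for retrieval tasks. The card does not say whether this fine-tune keeps that convention, so treat the prefixes in the following semantic-search sketch as an assumption; the query and passages are hypothetical.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bourdoiscatie/multilingual-e5-large-approche5")

# Hypothetical inputs; the "query: "/"passage: " prefixes follow the base
# model's convention and may or may not apply to this fine-tune.
query = "query: Quand la femme a-t-elle un rendez-vous ?"
passages = [
    "passage: La femme est en route pour un rendez-vous.",
    "passage: La société a utilisé des prix agressifs.",
]

query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # shape [1, 2]; a higher score means a closer match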
Training hyperparameters

Non-default hyperparameters:

eval_strategy: epoch
learning_rate: 1e-05
weight_decay: 0.01
num_train_epochs: 1
batch_sampler: no_duplicates

All hyperparameters:

overwrite_output_dir: False
do_predict: False
eval_strategy: epoch
prediction_loss_only: True
per_device_train_batch_size: 8
per_device_eval_batch_size: 8
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 1e-05
weight_decay: 0.01
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.0
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: False
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: False
hub_always_push: False
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
dispatch_batches: None
split_batches: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
eval_use_gather_object: False
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional

Training logs:

Epoch | Step | Training Loss | nli loss | sts loss | triplet loss |
---|---|---|---|---|---|
0.0137 | 500 | 2.3683 | - | - | - |
0.0273 | 1000 | 2.2564 | - | - | - |
0.0410 | 1500 | 2.3976 | - | - | - |
0.0547 | 2000 | 2.1925 | - | - | - |
0.0684 | 2500 | 2.1542 | - | - | - |
0.0820 | 3000 | 2.0945 | - | - | - |
0.0957 | 3500 | 2.1411 | - | - | - |
0.1094 | 4000 | 1.9079 | - | - | - |
0.1231 | 4500 | 1.7574 | - | - | - |
0.1367 | 5000 | 2.1923 | - | - | - |
0.1504 | 5500 | 2.0054 | - | - | - |
0.1641 | 6000 | 1.6717 | - | - | - |
0.1778 | 6500 | 1.7374 | - | - | - |
0.1914 | 7000 | 2.0042 | - | - | - |
0.2051 | 7500 | 1.7486 | - | - | - |
0.2188 | 8000 | 1.5635 | - | - | - |
0.2324 | 8500 | 1.8133 | - | - | - |
0.2461 | 9000 | 1.7885 | - | - | - |
0.2598 | 9500 | 1.6298 | - | - | - |
0.2735 | 10000 | 1.3568 | - | - | - |
0.2871 | 10500 | 1.8475 | - | - | - |
0.3008 | 11000 | 1.7642 | - | - | - |
0.3145 | 11500 | 1.4048 | - | - | - |
0.3282 | 12000 | 1.3782 | - | - | - |
0.3418 | 12500 | 1.8164 | - | - | - |
0.3555 | 13000 | 1.5559 | - | - | - |
0.3692 | 13500 | 1.2515 | - | - | - |
0.3828 | 14000 | 1.4736 | - | - | - |
0.3965 | 14500 | 1.5527 | - | - | - |
0.4102 | 15000 | 1.384 | - | - | - |
0.4239 | 15500 | 1.167 | - | - | - |
0.4375 | 16000 | 1.6116 | - | - | - |
0.4512 | 16500 | 1.5668 | - | - | - |
0.4649 | 17000 | 1.1458 | - | - | - |
0.4786 | 17500 | 1.1103 | - | - | - |
0.4922 | 18000 | 1.6152 | - | - | - |
0.5059 | 18500 | 1.347 | - | - | - |
0.5196 | 19000 | 1.1 | - | - | - |
0.5333 | 19500 | 1.2662 | - | - | - |
0.5469 | 20000 | 1.456 | - | - | - |
0.5606 | 20500 | 1.1928 | - | - | - |
0.5743 | 21000 | 0.9972 | - | - | - |
0.5879 | 21500 | 1.4499 | - | - | - |
0.6016 | 22000 | 1.3264 | - | - | - |
0.6153 | 22500 | 1.003 | - | - | - |
0.6290 | 23000 | 1.0512 | - | - | - |
0.6426 | 23500 | 1.3041 | - | - | - |
0.6563 | 24000 | 1.1227 | - | - | - |
0.6700 | 24500 | 0.9579 | - | - | - |
0.6837 | 25000 | 1.1196 | - | - | - |
0.6973 | 25500 | 1.1362 | - | - | - |
0.7110 | 26000 | 1.0376 | - | - | - |
0.7247 | 26500 | 0.8037 | - | - | - |
0.7384 | 27000 | 1.2622 | - | - | - |
0.7520 | 27500 | 1.1696 | - | - | - |
0.7657 | 28000 | 0.8923 | - | - | - |
0.7794 | 28500 | 0.8389 | - | - | - |
0.7930 | 29000 | 1.2655 | - | - | - |
0.8067 | 29500 | 0.965 | - | - | - |
0.8204 | 30000 | 0.8043 | - | - | - |
0.8341 | 30500 | 1.0491 | - | - | - |
0.8477 | 31000 | 1.1186 | - | - | - |
0.8614 | 31500 | 0.8794 | - | - | - |
0.8751 | 32000 | 0.7776 | - | - | - |
0.8888 | 32500 | 1.1299 | - | - | - |
0.9024 | 33000 | 0.9544 | - | - | - |
0.9161 | 33500 | 0.7195 | - | - | - |
0.9298 | 34000 | 0.8298 | - | - | - |
0.9434 | 34500 | 1.0767 | - | - | - |
0.9571 | 35000 | 0.8287 | - | - | - |
0.9708 | 35500 | 0.7331 | - | - | - |
0.9845 | 36000 | 0.904 | - | - | - |
0.9981 | 36500 | 0.9645 | - | - | - |
1.0 | 36568 | - | 0.0193 | 5.4479 | 0.5933 |
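
The logs track three evaluation loss columns (nli, sts, triplet), which, together with the citations below, suggest multi-dataset training with MultipleNegativesRankingLoss and CoSENTLoss among the objectives. Here is a hedged sketch of how a comparable run could be assembled with the sentence-transformers v3 trainer; the dataset paths and the loss-to-dataset mapping are placeholders, not the authors' actual setup.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss, MultipleNegativesRankingLoss, TripletLoss
from sentence_transformers.training_args import BatchSamplers, MultiDatasetBatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Placeholder datasets: the card does not name the training corpora.
train_datasets = {
    "nli": load_dataset("path/to/nli-dataset", split="train"),          # hypothetical
    "sts": load_dataset("path/to/sts-dataset", split="train"),          # hypothetical
    "triplet": load_dataset("path/to/triplet-dataset", split="train"),  # hypothetical
}
losses = {
    "nli": MultipleNegativesRankingLoss(model),  # matches the Henderson et al. citation
    "sts": CoSENTLoss(model),                    # matches the CoSENT citation
    "triplet": TripletLoss(model),               # assumption for the triplet column
}

args = SentenceTransformerTrainingArguments(
    output_dir="multilingual-e5-large-approche5",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    weight_decay=0.01,
    # The card also set eval_strategy="epoch"; that requires eval datasets,
    # which are left out of this sketch.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.PROPORTIONAL,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_datasets,
    loss=losses,
)
trainer.train()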
Citation

@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
Base model: intfloat/multilingual-e5-large