SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

It has been finetuned on a range of Q&A pairs derived from UK government policy documents.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
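
Once loaded, the properties listed above can be checked directly on the model object. The snippet below is a minimal sketch using the standard Sentence Transformers API; the commented values reflect the configuration in this card.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model)                                     # prints the module stack shown above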

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")
# Run inference
sentences = [
    'How much funding has the government committed to expand the Public Sector Fraud Authority to deploy AI in combating fraud?',
    '2) Embracing the opportunities presented by making greater use of cutting-edge technology, such as AI, across the public sector. The government is:\nMore than doubling the size of i.AI, the AI incubator team, ensuring that the UK government has the in-house expertise consisting of the most talented technology professionals in the UK, who can apply their skills and expertise to appropriately seize the benefits of AI across the public sector and Civil Service.\nCommitting £34 million to expand the Public Sector Fraud Authority by deploying AI to help combat fraud across the public sector, making it easier to spot, stop and catch fraudsters thereby saving £100 million for the public purse.\nCommitting £17 million to accelerate DWP’s digital transformation, replacing paper-based processes with simplified online services, such as a new system for the Child Maintenance Service.\nCommitting £14 million for public sector research and innovation infrastructure. This includes funding to develop the next generation of health and security technologies, unlocking productivity improvements in the public and private sector alike.\n3) Strengthening preventative action to reduce demand on public services. The government is:\nCommitting an initial £105 million towards a wave of 15 new special free schools to create over 2,000 additional places for children with special educational needs and disabilities (SEND) across England. This will help more children receive a world-class education and builds on the significant levels of capital funding for SEND invested at the 2021 Spending Review. The locations of these special free schools will be announced by May 2024.\nConfirming the location of 20 Alternative Provision (AP) free schools, which will create over 1,600 additional AP places across England as part of the Spending Review 2021 commitment to invest £2.6 billion capital in high needs provision. This will support early intervention, helping improve outcomes for children requiring alternative provision, and helping them to fulfil their potential.',
    'We will help build the UKDev (UK International Development) approach and brand by leveraging the UK’s comparative advantage within both the public and private sectors. We will build first and foremost on existing successful partnerships, through which we share UK models and expertise to support digital transformation in partner countries. For example, through our collaboration with the British Standards Institution (BSI) we will expand our collaboration to build the capacity of partner countries in Africa and South-East Asia (including through ASEAN) on digital standards, working with local private sector and national standards-setting bodies.\nWe will strengthen our delivery of peer learning activities in collaboration with Ofcom, exchanging experiences and sharing the UK models on spectrum management, local networks and other technical areas with telecoms regulators in partner countries, building on the positive peer-learning experience with Kenya and South Africa.\nWe will collaborate with Government Digital Service (GDS) to share know-how with partner countries on digitalisation in the public sector, building on our advisory role in GovStack[footnote 56]. We will leverage the UK experience of DPI for public or regulated services (health, transport, banking, land registries) based on the significant demand for this expertise from developing countries and riding the momentum on DPI generated by the G20 India presidency of 2023.\n 6.4 Enhancing FCDO’s digital development capability\nThe UK government will also enhance its own digital development capability to keep up with the pace of technological change, to be forward-looking and anticipate emergent benefits and risks of digital transformation. We will invest in new research on digital technologies and on their inclusive business models to build the global evidence base, share lessons learned and improve knowledge management through our portfolio of digital development and technology programmes, including the FCDO’s new Technology Centre for Expertise (Tech CoE), which will complement and support our programming portfolio.\nSince all sectors within international development are underpinned by digital technologies, we will ensure that digital development skills are mainstreamed across the FCDO. We will raise awareness and upgrade staff knowledge through new training opportunities on best practice in the complex and evolving area of digital development, through partnering with existing FCDO capability initiatives, ie the International Academy’s Development Faculty, the Cyber Network and the International Technology curriculum.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
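
Beyond pairwise similarity, the embeddings can be used for semantic search over a collection of policy passages. The sketch below uses util.semantic_search from the Sentence Transformers library; the corpus and query are illustrative placeholders, not items from the training data.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")

# Illustrative corpus of policy passages (placeholders)
corpus = [
    "The government is committing £34 million to expand the Public Sector Fraud Authority, deploying AI to combat fraud.",
    "We will collaborate with the Government Digital Service to share know-how on public sector digitalisation.",
]
query = "How much funding is committed to fighting fraud with AI?"

# Encode the corpus and query, then retrieve the closest passages by cosine similarity
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")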

Evaluation

Metrics

Semantic Similarity

Metric               Value
pearson_cosine       0.8601
spearman_cosine      0.8582
pearson_manhattan    0.8605
spearman_manhattan   0.8572
pearson_euclidean    0.8616
spearman_euclidean   0.8582
pearson_dot          0.8601
spearman_dot         0.8582
pearson_max          0.8616
spearman_max         0.8582
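
These figures come from an embedding similarity evaluation on a held-out development set (the sts-dev set in the training logs below). As an illustration only, metrics of this kind can be produced with the library's EmbeddingSimilarityEvaluator; the sentence pairs and gold scores in this sketch are hypothetical placeholders, not the actual dev set.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("AndreasThinks/all-MiniLM-L6-v2_policy_doc_finetune")

# Hypothetical sentence pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "How much funding has been committed to expand the Public Sector Fraud Authority?",
        "How will the FCDO build its digital development capability?",
    ],
    sentences2=[
        "The government is committing £34 million to expand the Public Sector Fraud Authority.",
        "The locations of the special free schools will be announced by May 2024.",
    ],
    scores=[0.9, 0.1],
    name="sts-dev",
)
results = evaluator(model)
print(results)  # pearson/spearman scores for cosine, manhattan, euclidean and dot similarities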

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • warmup_ratio: 0.1
  • use_mps_device: True
  • batch_sampler: no_duplicates
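
For reference, these non-default values map onto the Sentence Transformers v3 training API roughly as sketched below. This is not the exact training script: the output directory, the toy dataset and the MultipleNegativesRankingLoss wiring are assumptions (the loss is suggested by the citation section at the end of this card).

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Hypothetical question/answer pairs standing in for the policy-document dataset
train_dataset = Dataset.from_dict({
    "question": ["How much funding is committed to the Public Sector Fraud Authority?"],
    "answer": ["Committing £34 million to expand the Public Sector Fraud Authority."],
})
eval_dataset = train_dataset  # placeholder dev split

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model)  # assumed from the citation section

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",                      # assumption
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    use_mps_device=True,                       # Apple Silicon; omit on CUDA/CPU
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()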

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: True
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  Step  Training Loss  Validation Loss  sts-dev_spearman_cosine
0.0562 100 0.3598 0.8263 0.8672
0.1124 200 0.1983 0.7948 0.8666
0.1686 300 0.2021 0.7623 0.8666
0.2248 400 0.1844 0.7510 0.8657
0.2811 500 0.1704 0.7575 0.8629
0.3373 600 0.1643 0.7348 0.8641
0.3935 700 0.1808 0.7293 0.8604
0.4497 800 0.1494 0.7232 0.8636
0.5059 900 0.1563 0.7161 0.8634
0.5621 1000 0.1345 0.7115 0.8643
0.6183 1100 0.1344 0.7142 0.8617
0.6745 1200 0.1584 0.7106 0.8622
0.7307 1300 0.1488 0.7130 0.8592
0.7870 1400 0.1391 0.7034 0.8635
0.8432 1500 0.1433 0.7140 0.8614
0.8994 1600 0.1393 0.7067 0.8612
0.9556 1700 0.1644 0.6950 0.8628
1.0118 1800 0.1399 0.7072 0.8594
1.0680 1900 0.1200 0.7093 0.8594
1.1242 2000 0.0904 0.7040 0.8587
1.1804 2100 0.0820 0.6962 0.8585
1.2366 2200 0.0715 0.6985 0.8593
1.2929 2300 0.0624 0.7233 0.8562
1.3491 2400 0.0725 0.7064 0.8581
1.4053 2500 0.0665 0.7034 0.8570
1.4615 2600 0.0616 0.6940 0.8584
1.5177 2700 0.0703 0.6886 0.8599
1.5739 2800 0.0564 0.6860 0.8603
1.6301 2900 0.0603 0.6962 0.8590
1.6863 3000 0.0729 0.6906 0.8589
1.7426 3100 0.0753 0.6946 0.8579
1.7988 3200 0.0711 0.6909 0.8582
1.8550 3300 0.0743 0.6896 0.8583
1.9112 3400 0.0693 0.6902 0.8581
1.9674 3500 0.0845 0.6904 0.8582

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
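
To reproduce this environment, the versions above can be pinned at install time, for example:

pip install sentence-transformers==3.0.1 transformers==4.41.2 torch==2.3.1 accelerate==0.31.0 datasets==2.20.0 tokenizers==0.19.1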

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}