SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
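Because the final Normalize() module L2-normalizes the CLS-pooled output, the embeddings are unit-length, so a plain dot product coincides with the model's cosine similarity function. A minimal sketch to check this (the input strings are arbitrary illustrative course titles):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v3")
emb = model.encode(["Databases: Relational", "Intro to Visualization in Python"])

# The Normalize() module should make every embedding unit-length ...
print(np.linalg.norm(emb, axis=1))  # expected: values very close to 1.0

# ... so the dot product of two embeddings matches the cosine similarity reported by the model.
print(float(emb[0] @ emb[1]))
print(model.similarity(emb, emb)[0, 1].item())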
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v3")
# Run inference
sentences = [
    'Intro to JavaScript: Basic Concepts',
    'A course that finalizes the series of introductory JavaScript courses and introduces the students to the basic concepts in the JavaScript ecosystem.',
    'Course language: HTML, JavaScript',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
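Since the model is tuned for course recommendation, a natural usage pattern is to embed a free-text request and rank catalogue entries by cosine similarity. A minimal sketch with hypothetical catalogue strings (the course texts below are illustrative, not taken from the training data):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("datasocietyco/bge-base-en-v1.5-course-recommender-v3")

# Hypothetical catalogue entries in the same "name + description + metadata" style as the training data
courses = [
    "Intro to Visualization in Python A first look at common Python plotting libraries. Course language: Python",
    "Databases: Relational Foundations of relational databases and SQL queries. Course language: SQL",
    "Neural Networks & Deep Learning Core concepts and training of deep neural networks. Course language: Python",
]
query = "I want to learn basic statistics and charts in Python"

course_embeddings = model.encode(courses)
query_embedding = model.encode([query])

# Rank catalogue entries by cosine similarity to the query, highest first
scores = model.similarity(query_embedding, course_embeddings)[0]
for idx in scores.argsort(descending=True):
    idx = int(idx)
    print(f"{scores[idx].item():.3f}  {courses[idx]}")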
Training Details
Training Dataset
Unnamed Dataset
- Size: 146 training samples
- Columns: name, description, languages, prerequisites, target_audience, and merged
- Approximate statistics based on the first 146 samples:
|             | name | description | languages | prerequisites | target_audience | merged |
|-------------|------|-------------|-----------|---------------|-----------------|--------|
| type        | string | string | string | string | string | string |
| min tokens  | 3 | 13 | 6 | 8 | 5 | 45 |
| mean tokens | 7.12 | 41.41 | 6.65 | 12.69 | 23.17 | 83.04 |
| max tokens  | 16 | 117 | 10 | 21 | 54 | 174 |
- Samples:
  - Sample 1:
    - name: Introduction to Statistics
    - description: This course is designed for learners who would like to learn about statistics and apply it for decision-making. This course is a comprehensive review of statistical terms ranging from foundational (mean, median, mode, standard deviation, variance, covariance, correlation) to more complex concepts such as normality in data, confidence intervals, and p-values. Additional topics include how to calculate summary statistics and how to carry out hypothesis testing to inform decisions.
    - languages: Course language: Python
    - prerequisites: Prerequisite course required: Intro to Visualization in Python
    - target_audience: Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.
    - merged: Introduction to Statistics This course is designed for learners who would like to learn about statistics and apply it for decision-making. This course is a comprehensive review of statistical terms ranging from foundational (mean, median, mode, standard deviation, variance, covariance, correlation) to more complex concepts such as normality in data, confidence intervals, and p-values. Additional topics include how to calculate summary statistics and how to carry out hypothesis testing to inform decisions. Course language: Python Prerequisite course required: Intro to Visualization in Python Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.
  - Sample 2:
    - name: Statistics & Probability
    - description: This course is designed for learners who would like to learn about statistics and apply it for decision-making. This course is a comprehensive review of advanced statistics topics on probability like permutations and combinations, joint probability, conditional probability, marginal probability, and Bayes' theorem that provides a way to revise existing predictions or update probabilities given new or additional evidence.
    - languages: Course language: Python
    - prerequisites: Prerequisite course required: Intermediate Statistics
    - target_audience: Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.
    - merged: Statistics & Probability This course is designed for learners who would like to learn about statistics and apply it for decision-making. This course is a comprehensive review of advanced statistics topics on probability like permutations and combinations, joint probability, conditional probability, marginal probability, and Bayes' theorem that provides a way to revise existing predictions or update probabilities given new or additional evidence. Course language: Python Prerequisite course required: Intermediate Statistics Professionals some Python experience who would like to expand their skill set to more advanced Python visualization techniques and tools.
  - Sample 3:
    - name: Databases: Advanced Relational
    - description: A deeper dive into the many capabilities of a relational database, how to optimize usage and make sure that your are getting the most use out of your database so that you have a strong base for your applications.
    - languages: Course language: SQL
    - prerequisites: Prerequisite course required: Databases: Relational
    - target_audience: Professionals who would like to improve on their knowledge of relational databases.
    - merged: Databases: Advanced Relational A deeper dive into the many capabilities of a relational database, how to optimize usage and make sure that your are getting the most use out of your database so that you have a strong base for your applications. Course language: SQL Prerequisite course required: Databases: Relational Professionals who would like to improve on their knowledge of relational databases.
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" } (a construction sketch follows below)
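For reference, the parameters above correspond to roughly the following loss construction in Sentence Transformers (a sketch; cos_sim is the default similarity_fct for this loss, matching the "cos_sim" value listed):

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
# scale=20.0 with cosine similarity reproduces the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)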
Evaluation Dataset
Unnamed Dataset
- Size: 37 evaluation samples
- Columns: name, description, languages, prerequisites, target_audience, and merged
- Approximate statistics based on the first 37 samples:
|             | name | description | languages | prerequisites | target_audience | merged |
|-------------|------|-------------|-----------|---------------|-----------------|--------|
| type        | string | string | string | string | string | string |
| min tokens  | 4 | 14 | 6 | 8 | 13 | 47 |
| mean tokens | 6.84 | 36.92 | 6.7 | 12.05 | 23.3 | 77.81 |
| max tokens  | 13 | 84 | 10 | 18 | 48 | 124 |
- Samples:
  - Sample 1:
    - name: Understanding Different OS Concepts
    - description: A course that builds foundational knowledge of what an operating system is. It walks through the different core concepts of OS and its inner workings.
    - languages: Course language: TBD
    - prerequisites: Prerequisite course required: Domain & Hosting
    - target_audience: Professionals who would like to learn the core concepts of Operating system
    - merged: Understanding Different OS Concepts A course that builds foundational knowledge of what an operating system is. It walks through the different core concepts of OS and its inner workings. Course language: TBD Prerequisite course required: Domain & Hosting Professionals who would like to learn the core concepts of Operating system
  - Sample 2:
    - name: Basic GraphQL: Node.js
    - description: An introduction to GraphQL, what it is good for and how to use it to query or change data.
    - languages: Course language: JavaScript
    - prerequisites: Prerequisite course required: JSON APIs: Node.js
    - target_audience: Professionals who would like to learn the core concepts of GraphQL, using Node.js
    - merged: Basic GraphQL: Node.js An introduction to GraphQL, what it is good for and how to use it to query or change data. Course language: JavaScript Prerequisite course required: JSON APIs: Node.js Professionals who would like to learn the core concepts of GraphQL, using Node.js
  - Sample 3:
    - name: Deep Learning for Text Analysis
    - description: This course continues on tackling topics in deep learning that address specific problem types. In this course students will be getting to know RNNs and LSTMs - types of neural networks that are often used for solving problems in text analysis.
    - languages: Course language: Python
    - prerequisites: Prerequisite course required: Neural Networks & Deep Learning
    - target_audience: Professionals who would like to get a base-level understanding of the recurrent neural networks, their subtypes, and their application in text analysis.
    - merged: Deep Learning for Text Analysis This course continues on tackling topics in deep learning that address specific problem types. In this course students will be getting to know RNNs and LSTMs - types of neural networks that are often used for solving problems in text analysis. Course language: Python Prerequisite course required: Neural Networks & Deep Learning Professionals who would like to get a base-level understanding of the recurrent neural networks, their subtypes, and their application in text analysis.
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
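In both the training and evaluation datasets, the merged column is simply the other five fields joined into a single training text, as the sample rows above show. A rough sketch of how such a record could be assembled (the helper name and the plain space-joining are assumptions inferred from the samples, not the authors' actual preprocessing code):

def merge_course_record(name, description, languages, prerequisites, target_audience):
    # Assumption: the merged text is a space-joined concatenation of the fields,
    # matching the pattern visible in the sample rows above.
    return " ".join([name, description, languages, prerequisites, target_audience])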
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 3e-06
- max_steps: 64
- warmup_ratio: 0.1
- batch_sampler: no_duplicates
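A sketch of how these non-default values map onto SentenceTransformerTrainingArguments in Sentence Transformers 3.x (output_dir is a placeholder, and every argument not set here keeps the default shown under "All Hyperparameters" below):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                        # placeholder, not from the card
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=3e-06,
    max_steps=64,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # "no_duplicates" above
)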
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 3e-06
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: 64
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 2.0   | 20   | 1.4618        | 1.0396          |
| 4.0   | 40   | 0.8698        | 0.8235          |
| 6.0   | 60   | 0.8096        | 0.7544          |
Framework Versions
- Python: 3.11.7
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.2.2
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.20.0
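To approximate this environment, the listed versions can be pinned at install time (a convenience command; other reasonably recent versions of these packages should also load the model):

pip install sentence-transformers==3.1.1 transformers==4.45.1 torch==2.2.2 accelerate==0.34.2 datasets==3.0.0 tokenizers==0.20.0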
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}