---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
  - generated_from_trainer
model-index:
  - name: defiant-cow-743
    results: []
---

# defiant-cow-743

This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) for multi-label StackOverflow tag classification (the training dataset is not otherwise documented in this card). It achieves the following results on the evaluation set; a sketch of how the thresholded metrics can be computed follows the list:

- Loss: 0.1694
- Hamming Loss: 0.0587
- Zero One Loss: 0.3888
- Jaccard Score: 0.3282
- Hamming Loss Optimised: 0.0546
- Hamming Loss Threshold: 0.7112
- Zero One Loss Optimised: 0.385
- Zero One Loss Threshold: 0.5227
- Jaccard Score Optimised: 0.3043
- Jaccard Score Threshold: 0.3381
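Each "Optimised" value reports a metric at the decision threshold (listed alongside it) that gave the best score on the evaluation set, rather than at the default 0.5. The evaluation code is not published with this card, but a minimal sketch of how such metrics and thresholds can be computed with scikit-learn, assuming sigmoid output probabilities, looks like this:

```python
import numpy as np
from sklearn.metrics import hamming_loss, jaccard_score, zero_one_loss

def multilabel_metrics(probs: np.ndarray, labels: np.ndarray, threshold: float = 0.5) -> dict:
    """Binarise sigmoid probabilities at `threshold` and score the predictions."""
    preds = (probs >= threshold).astype(int)
    return {
        "hamming_loss": hamming_loss(labels, preds),
        "zero_one_loss": zero_one_loss(labels, preds),
        "jaccard_score": jaccard_score(labels, preds, average="samples", zero_division=0),
    }

def best_threshold(probs, labels, metric="hamming_loss", grid=np.linspace(0.05, 0.95, 181)):
    """Grid-search the threshold that optimises one metric (losses down, Jaccard up)."""
    sign = -1 if metric == "jaccard_score" else 1
    return min((sign * multilabel_metrics(probs, labels, t)[metric], t) for t in grid)[1]
```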

## Model description

More information needed

## Intended uses & limitations

More information needed
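In the absence of author-provided usage notes, here is a minimal inference sketch. It assumes the classifier head was trained with `problem_type="multi_label_classification"` (so each logit is passed through a sigmoid independently); the repository id below is illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ElMad/defiant-cow-743"  # hypothetical hub id; substitute the real checkpoint path

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "How do I merge two dictionaries in a single expression in Python?"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Multi-label decoding: keep every tag whose probability clears the threshold.
tags = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(tags)
```

A metric-specific threshold from the evaluation above (e.g. ~0.71 for Hamming loss) can be substituted for the default 0.5.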

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):

- learning_rate: 7.559719999499729e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: AdamW (torch) with betas=(0.9012137258321917, 0.9887626606614206) and epsilon=1e-07; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
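For reproduction, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a sketch: `output_dir` is illustrative, and dataset preparation plus the `Trainer` call are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="defiant-cow-743",
    learning_rate=7.559719999499729e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=2024,
    optim="adamw_torch",
    adam_beta1=0.9012137258321917,
    adam_beta2=0.9887626606614206,
    adam_epsilon=1e-07,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```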

### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| No log | 1.0 | 100 | 0.1620 | 0.0604 | 0.475 | 0.4250 | 0.0595 | 0.5386 | 0.4363 | 0.3884 | 0.3294 | 0.3229 |
| No log | 2.0 | 200 | 0.1549 | 0.0563 | 0.3862 | 0.3276 | 0.0561 | 0.6040 | 0.3875 | 0.5045 | 0.3064 | 0.3565 |
| No log | 3.0 | 300 | 0.1694 | 0.0587 | 0.3888 | 0.3282 | 0.0546 | 0.7112 | 0.385 | 0.5227 | 0.3043 | 0.3381 |

## Framework versions

- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0