Model Card for kbulutozler/distilbert-base-uncased-FT-ner-JNLPBA
A DistilBERT model fine-tuned for named entity recognition (NER) on the training split of the JNLPBA dataset, taken from the BLURB benchmark.
Model Details
- Base model: distilbert/distilbert-base-uncased
- Task: token classification (named entity recognition)
- Fine-tuning data: JNLPBA (BLURB)
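A minimal usage sketch with the transformers token-classification pipeline. The exact label set and aggregation behavior depend on the model's uploaded config, and the input sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an NER pipeline.
ner = pipeline(
    "token-classification",
    model="kbulutozler/distilbert-base-uncased-FT-ner-JNLPBA",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)

# Example biomedical sentence (illustrative input).
print(ner("IL-2 gene expression requires activation of NF-kappa B."))
```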
Training Details
Training Data
The training split of the JNLPBA dataset, as distributed in the BLURB benchmark.
Training Procedure
Standard full-model fine-tuning for token classification.
Training Hyperparameters
- learning_rate: 5e-5
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 3
- weight_decay: 0.01
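The exact training script is not included in the card. Below is a minimal fine-tuning sketch using the standard transformers Trainer with the hyperparameters listed above; the dataset id ("jnlpba"), column names, and tokenization details are illustrative assumptions:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: a community copy of JNLPBA on the Hugging Face Hub with
# "tokens" and "ner_tags" columns.
dataset = load_dataset("jnlpba")
labels = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=len(labels)
)

def tokenize_and_align(batch):
    # Tokenize pre-split words and align word-level tags to subword tokens,
    # masking special tokens and continuation pieces with -100.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, ids = None, []
        for w in enc.word_ids(batch_index=i):
            ids.append(-100 if w is None or w == prev else tags[w])
            prev = w
        enc["labels"].append(ids)
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True)

# Hyperparameters taken directly from this card.
args = TrainingArguments(
    output_dir="distilbert-jnlpba",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized.get("validation", tokenized["test"]),
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```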
Evaluation
Testing Data
The test split of the JNLPBA dataset.
Results
- Precision: 0.73
- Recall: 0.83
- Micro-F1: 0.78
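The evaluation script is not included in the card. A minimal sketch of entity-level scoring with seqeval, assuming the reported numbers are micro-averaged entity-level scores over BIO-tagged sequences (as is standard for JNLPBA); the label sequences below are illustrative:

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# One list of BIO tags per sentence (illustrative data).
references  = [["B-protein", "I-protein", "O", "B-DNA"]]
predictions = [["B-protein", "I-protein", "O", "O"]]

print("Precision:", precision_score(references, predictions))
print("Recall:   ", recall_score(references, predictions))
print("Micro-F1: ", f1_score(references, predictions))
```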
Environmental Impact
- Hardware Type: 1xRTX A4000
- Hours used: ~0.32 (19 minutes)