---
library_name: transformers
tags:
  - generated_from_keras_callback
model-index:
  - name: DistilBERT-base-uncased-english-finetuned-squad
    results: []
datasets:
  - rajpurkar/squad
language:
  - en
base_model:
  - distilbert/distilbert-base-uncased
pipeline_tag: question-answering
---

# DistilBERT-base-uncased-english-finetuned-squad

This model was fine-tuned on the SQuAD dataset. Load it with `TFDistilBertForQuestionAnswering`, and use `DistilBertTokenizerFast` to produce the tokenized inputs the model expects.

## Model description

Base DistilBERT model fine-tuned on the SQuAD dataset for context-based (extractive) question answering.

## Training procedure

The model was trained for 3 epochs.

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: Adam with `learning_rate=5e-5`
- training_precision: float32

### Training results

Final-epoch training loss: 0.6417; validation loss: 1.2772. Evaluation on standard QA metrics (e.g., exact match / F1) has not yet been performed.

### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.0
- Tokenizers 0.19.1