# Question Answering with Hugging Face Transformers and Keras 🤗❤️
This model is a fine-tuned version of distilbert-base-cased on the SQuAD dataset. It achieves the following results:
- Train Loss: 0.9300
- Validation Loss: 1.1437
- Epoch: 1
## Model description
Question answering model based on distilbert-base-cased, trained with 🤗Transformers + ❤️Keras.
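A minimal usage sketch, assuming the 🤗 Transformers `pipeline` API and the hub id `keras-io/transformers-qa` shown on this card; the question and context strings are illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub into a QA pipeline.
qa = pipeline("question-answering", model="keras-io/transformers-qa")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```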
## Intended uses & limitations
This model was trained for the question answering tutorial on Keras.io.
## Training and evaluation data
It is trained on the SQuAD question answering dataset. ⁉️
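As a sketch, the data can be loaded with 🤗 Datasets, assuming the standard `squad` dataset on the Hub:

```python
from datasets import load_dataset

# SQuAD ships with "train" and "validation" splits.
squad = load_dataset("squad")

example = squad["train"][0]
print(example["question"])
print(example["answers"])  # {'text': [...], 'answer_start': [...]}
```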
## Training procedure
You can find the training notebook in the Keras code examples. ❤️
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
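The settings above can be recreated in Keras as follows; a minimal sketch assuming TensorFlow 2.x, with model construction and the training loop omitted:

```python
from tensorflow import keras

# Matches `training_precision: mixed_float16` above.
keras.mixed_precision.set_global_policy("mixed_float16")

# Matches the Adam configuration above (decay=0.0 is the default).
optimizer = keras.optimizers.Adam(
    learning_rate=5e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```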
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5145     | 1.1500          | 0     |
| 0.9300     | 1.1437          | 1     |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3