roberta-base for Extractive QA
This is the roberta-base model, fine-tuned on the SQuAD 2.0 dataset. It was trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.
Overview
Language model: roberta-base
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
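A minimal usage sketch with the `transformers` question-answering pipeline. The repo id below is an assumption (a placeholder pointing at a public roberta-base SQuAD 2.0 checkpoint); replace it with this model's actual Hub id. Because SQuAD 2.0 contains unanswerable questions, `handle_impossible_answer=True` is shown so the pipeline can return an empty answer when the context does not support one.

```python
from transformers import pipeline

# Assumed placeholder repo id; substitute this model's actual Hub id.
MODEL_ID = "deepset/roberta-base-squad2"

qa = pipeline("question-answering", model=MODEL_ID, tokenizer=MODEL_ID)

context = (
    "The Amazon rainforest covers much of the Amazon basin of South America. "
    "The majority of the forest is contained within Brazil."
)

# Answerable question: the span is extracted directly from the context.
result = qa(question="Which country contains most of the forest?", context=context)
print(result["answer"], result["score"])

# Unanswerable question: with handle_impossible_answer=True the pipeline
# may return an empty string instead of forcing a spurious span.
result = qa(
    question="Who discovered the rainforest?",
    context=context,
    handle_impossible_answer=True,
)
print(result)
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.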