Electra base model for QA (SQuAD 2.0)
This model is based on electra-base and fine-tuned for extractive question answering.
Training Data
The model has been trained on the SQuAD 2.0 dataset, which contains both answerable and unanswerable questions. It can be used for the question answering task.
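To inspect the training data, SQuAD 2.0 can be loaded with the Hugging Face datasets library. This is only an optional sketch for exploring the data (the squad_v2 dataset name and field names are those published on the Hugging Face Hub) and is not required for inference:

from datasets import load_dataset

# Load the SQuAD 2.0 training split
squad_v2 = load_dataset('squad_v2', split='train')

# Each example has a question, a context and gold answers;
# for unanswerable questions the 'text' and 'answer_start' lists are empty
example = squad_v2[0]
print(example['question'])
print(example['answers'])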
Usage and Performance
The trained model can be used like this:
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load model & tokenizer
electra_model = AutoModelForQuestionAnswering.from_pretrained('navteca/electra-base-squad2')
electra_tokenizer = AutoTokenizer.from_pretrained('navteca/electra-base-squad2')

# Get predictions
nlp = pipeline('question-answering', model=electra_model, tokenizer=electra_tokenizer)

result = nlp({
    'question': 'How many people live in Berlin?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})

print(result)

#{
#  "answer": "3,520,031",
#  "end": 36,
#  "score": 0.99983448,
#  "start": 27
#}
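Because the model was fine-tuned on SQuAD 2.0, it can also indicate that a context does not contain an answer. Continuing from the snippet above, this is a minimal sketch using the handle_impossible_answer option of the question-answering pipeline (the question text is only an illustration):

# Ask something the context does not answer; with handle_impossible_answer=True
# the pipeline can return an empty answer with start == end == 0
result = nlp({
    'question': 'How many people live in Paris?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
}, handle_impossible_answer=True)

print(result)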