|
---
license: apache-2.0
datasets:
- squad
- adversarial_qa
language:
- en
metrics:
- exact_match
- f1
base_model:
- albert/albert-base-v2
model: xichenn/albert-base-v2-squad
library_name: transformers
model-index:
- name: xichenn/albert-base-v2-squad
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 84.68
      name: Exact Match
      verified: true
    - type: f1
      value: 91.4
      name: F1
      verified: true
---
|
|
|
# albert-base-v2-squad |
|
|
|
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert/albert-base-v2) on the SQuAD 1.1 and adversarial_qa datasets. |
|
It achieves the following results on the SQuAD 1.1 evaluation set: |
|
- Exact Match (EM): 84.68
|
- F1: 91.40 |
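For reference, the two metrics above can be sketched in plain Python. This is a minimal illustration of how SQuAD-style Exact Match and token-level F1 are computed per example, not the official SQuAD evaluation script (which additionally strips articles and punctuation before comparing):

```python
from collections import Counter

def normalize(text):
    # Minimal normalization: lowercase and split on whitespace.
    # The official SQuAD script also removes articles and punctuation.
    return text.lower().split()

def exact_match(prediction, reference):
    # 1.0 if the normalized answers are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    # Harmonic mean of token-overlap precision and recall.
    pred_tokens = normalize(prediction)
    ref_tokens = normalize(reference)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))   # 1.0
print(token_f1("in Paris", "Paris"))   # 0.666...
```

The reported scores are these per-example values averaged over the full validation split.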
|
|
|
## Usage

You can run the model locally with the Transformers `question-answering` pipeline:
|
|
|
```python |
|
from transformers import pipeline |
|
|
|
# Load the pipeline |
|
qa_pipeline = pipeline("question-answering", model="xichenn/albert-base-v2-squad") |
|
|
|
# Run inference |
|
result = qa_pipeline(
    question="What is the capital of France?",
    context="France is a country in Europe. Its capital is Paris.",
)
|
|
|
print(result) |
|
``` |
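The pipeline returns a dict with the fields `answer`, `score`, `start`, and `end`, where `start` and `end` are character offsets into the context. A small sketch of working with that output (the `score` value below is illustrative, not from an actual run):

```python
context = "France is a country in Europe. Its capital is Paris."

# Shape of a question-answering pipeline result; the score here
# is a placeholder, not a real model output.
result = {"score": 0.98, "start": 46, "end": 51, "answer": "Paris"}

# `start`/`end` are character offsets, so the answer span can be
# recovered directly by slicing the original context:
span = context[result["start"]:result["end"]]
print(span)  # Paris
```

This is useful when you need the answer's exact position in the source text, e.g. for highlighting.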