---
license: apache-2.0
datasets:
- squad
- adversarial_qa
language:
- en
metrics:
- exact_match
- f1
base_model:
- albert/albert-base-v2
pipeline_tag: question-answering
model: xichenn/albert-base-v2-squad
model-index:
- name: xichenn/albert-base-v2-squad
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 84.68
      name: Exact Match
      verified: true
    - type: f1
      value: 91.40
      name: F1
      verified: true
---

# albert-base-v2-squad

This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert/albert-base-v2) on the SQuAD 1.1 and adversarial_qa datasets. It achieves the following results on the SQuAD 1.1 validation set:

- Exact Match (EM): 84.68
- F1: 91.40

## Inference API

Here’s how to use the model for question answering:

```python
from transformers import pipeline

# Load the question-answering pipeline with the fine-tuned model
qa_pipeline = pipeline("question-answering", model="xichenn/albert-base-v2-squad")

# Run inference
result = qa_pipeline({
    "question": "What is the capital of France?",
    "context": "France is a country in Europe. Its capital is Paris.",
})

print(result)
```
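
If you prefer not to use the pipeline abstraction, here is a minimal sketch of running the model directly with `AutoTokenizer` and `AutoModelForQuestionAnswering`, decoding the answer span from the start/end logits (the question and context strings are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("xichenn/albert-base-v2-squad")
model = AutoModelForQuestionAnswering.from_pretrained("xichenn/albert-base-v2-squad")

question = "What is the capital of France?"
context = "France is a country in Europe. Its capital is Paris."

# Encode the question/context pair as a single sequence
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token positions for the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])

print(answer)
```

The pipeline above does the same span extraction internally (plus extra post-processing such as handling impossible spans), so both approaches should agree on simple inputs.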