---
license: apache-2.0
datasets:
- Pravesh390/country-capital-mixed
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- qlora
- flan-t5
- prompt-tuning
- question-answering
- hallucination
- robust-qa
- country-capital
model-index:
- name: flan-t5-qlora-countryqa-v1
  results:
  - task:
      type: text2text-generation
      name: Text2Text Generation
    dataset:
      type: Pravesh390/country-capital-mixed
      name: Country-Capital Mixed QA
    metrics:
    - type: bleu
      value: 92.5
    - type: rouge
      value: 87.3
---

# 🧠 FLAN-T5 QLoRA (Prompt-Tuned) - Country Capital QA

This model is a fine-tuned version of `google/flan-t5-base`, adapted with **QLoRA** and **Prompt Tuning** on a hybrid QA dataset.

## 📌 Highlights

- 🔍 Trained on both correct and incorrect (hallucinated) QA pairs
- ⚙️ Fine-tuned with 4-bit QLoRA via PEFT
- 🔧 Prompt tuning enables parameter-efficient adaptation

## 🏗️ Training

- Base model: `google/flan-t5-base`
- Method: **QLoRA** + **Prompt Tuning** with PEFT
- Quantization: 4-bit NF4
- Frameworks: 🤗 Transformers, PEFT, Accelerate
- Evaluation: BLEU = 92.5, ROUGE = 87.3

## 📚 Dataset

A mixture of 20 correct and 3 incorrect QA samples from `Pravesh390/country-capital-mixed`.

## 📦 Usage

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="Pravesh390/flan-t5-qlora-countryqa-v1")
print(pipe("What is the capital of Brazil?"))
```

## 📈 Intended Use

- Evaluating hallucination behavior in QA systems
- Developing robust models for real-world QA
- Academic research and education

## 🏷️ License

Apache 2.0. Free for research and commercial use.