---
license: afl-3.0
language:
- pt
pipeline_tag: text2text-generation
---

# Model Card for vabatista/question-generation-t5-pt-br

This model generates questions and answers from Brazilian Portuguese text passages, so you can fine-tune a BERT model on the generated (context, question, answer) triples for extractive question answering without supervision or labeled data (see the sketch at the end of this card).

It was fine-tuned from the [unicamp-dl/ptt5-base-t5-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-portuguese-vocab) base model on the [Brazilian Portuguese version of SQuAD 1.1](https://huggingface.co/datasets/ArthurBaia/squad_v1_pt_br) dataset to generate questions and answers from text passages.

### Model Description

- **Developed by:** Vitor Alcantara Batista (vabatista@gmail.com)
- **Model type:** T5 base
- **Language(s) (NLP):** Brazilian Portuguese
- **License:** [Academic Free License v. 3.0](https://opensource.org/license/afl-3-0-php/)
- **Finetuned from model:** unicamp-dl/ptt5-base-t5-portuguese-vocab

### Model Sources

- **Repository:** This model uses code from [https://github.com/patil-suraj/question_generation/](https://github.com/patil-suraj/question_generation/)

## Usage

How to use it (after cloning the GitHub repository above):

```python
from pipelines import pipeline  # pipelines.py comes from the cloned repository

nlp = pipeline("multitask-qa-qg",
               model='vabatista/question-generation-t5-pt-br',
               tokenizer='vabatista/question-generation-t5-pt-br')

text = """ PUT YOUR TEXT PASSAGE HERE """
nlp(text)
```

Sample usage/results:

![sample_results.png](sample_results.png)

## Training Details

TODO

## Model Card Authors

Vitor Alcantara Batista

## Model Card Contact

vabatista@gmail.com
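
## Using the Generated Triples (sketch)

The sketch below shows one way to turn the pipeline output into SQuAD-style records that a standard extractive-QA fine-tuning script (e.g., with Hugging Face Transformers) can consume. It is a minimal sketch, assuming the multitask pipeline returns a list of dicts with `question` and `answer` keys, as in the question_generation repository; the `passages` list and the output file name are placeholders, not part of this model's API.

```python
import json

from pipelines import pipeline  # from the cloned question_generation repository

nlp = pipeline("multitask-qa-qg",
               model='vabatista/question-generation-t5-pt-br',
               tokenizer='vabatista/question-generation-t5-pt-br')

# Hypothetical input: your own Brazilian Portuguese passages.
passages = ["COLOQUE SEUS TRECHOS DE TEXTO AQUI"]

records = []
for idx, context in enumerate(passages):
    for qa in nlp(context):  # assumed to yield {'question': ..., 'answer': ...}
        answer = qa["answer"].strip()
        start = context.find(answer)  # locate the answer span inside the passage
        if start == -1:               # skip answers that do not appear verbatim
            continue
        records.append({
            "id": f"{idx}-{len(records)}",
            "context": context,
            "question": qa["question"],
            "answers": {"text": [answer], "answer_start": [start]},
        })

# SQuAD-style records, ready for datasets.load_dataset("json", ...) and a
# regular BERT extractive-QA fine-tuning loop.
with open("generated_squad_pt_br.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```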