Basic Visual Question Answering model
The model can be trained with the train.ipynb notebook, using the dataset referenced in that notebook.
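A minimal sketch of what such a training setup could look like is shown below, assuming the notebook pairs a BERT text encoder with a small image encoder and trains an answer classifier with the Hugging Face Trainer. All class names, the image backbone, and the hyperparameters here are illustrative assumptions, not the notebook's actual code.

```python
# Illustrative VQA training setup (assumption: BERT text encoder + CNN image
# encoder + answer classification head, trained with the Hugging Face Trainer).
import torch
import torch.nn as nn
from torchvision.models import resnet18
from transformers import BertModel, Trainer, TrainingArguments

class SimpleVQAModel(nn.Module):
    def __init__(self, num_answers: int):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("google-bert/bert-base-uncased")
        vision = resnet18(weights=None)
        vision.fc = nn.Identity()  # drop the ImageNet head, keep 512-d features
        self.image_encoder = vision
        self.classifier = nn.Linear(self.text_encoder.config.hidden_size + 512, num_answers)

    def forward(self, input_ids, attention_mask, pixel_values, labels=None):
        text = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        image = self.image_encoder(pixel_values)
        logits = self.classifier(torch.cat([text, image], dim=-1))
        loss = nn.functional.cross_entropy(logits, labels) if labels is not None else None
        return {"loss": loss, "logits": logits}

# Illustrative arguments; the reported run trained for 10 epochs.
args = TrainingArguments(output_dir="vqa_base", num_train_epochs=10,
                         per_device_train_batch_size=64)

# With a prepared dataset (see the notebook), training would look roughly like:
# trainer = Trainer(model=SimpleVQAModel(num_answers=1000), args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```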
To run inference with the pretrained model and ask questions about an image, use the ask_questions.ipynb notebook.
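The sketch below mirrors what that inference step could look like: encode one image/question pair and take the highest-scoring answer. The checkpoint path, the answer vocabulary file, and the SimpleVQAModel class (from the training sketch above) are assumptions, not the notebook's actual API.

```python
# Hedged inference sketch: tokenize the question, preprocess the image,
# run a forward pass, and map the argmax logit back to an answer string.
import json
import torch
from PIL import Image
from torchvision import transforms
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def ask(model, image_path: str, question: str, answer_vocab: list[str]) -> str:
    """Return the model's top answer for a question about an image."""
    model.eval()
    encoding = tokenizer(question, return_tensors="pt", truncation=True)
    pixels = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(encoding["input_ids"], encoding["attention_mask"], pixels)["logits"]
    return answer_vocab[logits.argmax(dim=-1).item()]

# Example usage (paths and vocabulary file are placeholders):
# model = SimpleVQAModel(num_answers=1000)
# model.load_state_dict(torch.load("vqa_base/pytorch_model.bin", map_location="cpu"))
# answers = json.load(open("answer_vocab.json"))
# print(ask(model, "example.jpg", "What color is the car?", answers))
```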
Evals:
eval_loss: 3.4556491374969482
eval_wups: 0.3071833106722724
eval_acc: 0.2610264635124298
eval_f1: 0.03549650549105638
eval_runtime: 39.9818
eval_samples_per_second: 62.378
eval_steps_per_second: 0.975
epoch: 10.0
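For readers unfamiliar with the eval_wups number above, the sketch below shows how a WUPS-style score is commonly computed for single-word answers: a thresholded Wu-Palmer similarity over WordNet synsets. This is illustrative only; the notebook's exact metric implementation may differ.

```python
# WUPS-style score over single-word answers (assumes nltk with the WordNet
# corpus downloaded via nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def wup(pred: str, truth: str, threshold: float = 0.9) -> float:
    """Best Wu-Palmer similarity between any synsets of the two answers."""
    if pred == truth:
        return 1.0
    best = 0.0
    for p in wn.synsets(pred):
        for t in wn.synsets(truth):
            best = max(best, p.wup_similarity(t) or 0.0)
    # The standard WUPS measure down-weights scores below the threshold.
    return best if best >= threshold else 0.1 * best

def wups(preds: list[str], truths: list[str]) -> float:
    """Mean thresholded Wu-Palmer score over a batch of answers."""
    return sum(wup(p, t) for p, t in zip(preds, truths)) / len(preds)

# Example: wups(["dog"], ["cat"]) gives partial credit below 1.0.
```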
Base model: google-bert/bert-base-uncased