Model Card for HPT
Hyper-Pretrained Transformers (HPT) is a novel multimodal LLM framework from HyperGAI for training vision-language models capable of multimodal understanding of both textual and visual inputs. HPT has achieved highly competitive results against state-of-the-art models on a variety of multimodal LLM benchmarks. This repository contains the open-source weights to reproduce the evaluation results of HPT Air on different benchmarks.
For full details of this model, please read our technical blog post.
Run the model
Please use the scripts available in our GitHub repository to run the model.
Troubleshooting
Please report any issues at our GitHub repository.
Pretrained models used
Pretrained LLM: Yi-6B-Chat
Pretrained Visual Encoder: clip-vit-large-patch14-336
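To illustrate how these two pretrained components typically fit together in a vision-language model, here is a minimal sketch of the usual pattern: the visual encoder's patch features are mapped into the LLM's embedding space by a learned projector. This is not the official HPT code; the dimensions are assumptions inferred from the named components (clip-vit-large-patch14-336 yields 1024-dim features over (336/14)² = 576 patches; Yi-6B-Chat uses 4096-dim token embeddings), and the projector shown is a simple linear layer for illustration.

```python
import torch
import torch.nn as nn

# Assumed dimensions (not taken from the HPT release):
VISION_DIM = 1024                 # clip-vit-large-patch14-336 feature width
LLM_DIM = 4096                    # Yi-6B-Chat hidden size
NUM_PATCHES = (336 // 14) ** 2    # 576 image patches

# A projector bridges vision features into the LLM embedding space.
projector = nn.Linear(VISION_DIM, LLM_DIM)

# Stand-in for the visual encoder's output on one image.
patch_features = torch.randn(1, NUM_PATCHES, VISION_DIM)

# The projected "visual tokens" can be concatenated with text embeddings
# before being fed to the LLM.
visual_tokens = projector(patch_features)
print(visual_tokens.shape)  # torch.Size([1, 576, 4096])
```

In practice, frameworks differ in the projector design (a single linear layer, an MLP, or a resampler), but the interface — visual tokens shaped like LLM token embeddings — is the common pattern.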
Disclaimer and Responsible Use
Note that HPT Air is a quick open release of our models to facilitate open, responsible AI research and community development. It does not include any moderation mechanism and provides no guarantees on its outputs. We hope to engage with the community to make the model reliably respect guardrails, enabling adoption in practical applications that require moderated outputs.
Contact Us
- Contact: hpt@hypergai.com
- Follow us on Twitter.
- Follow us on LinkedIn.
- Visit our website to learn more about us.
License
This project is released under the Apache 2.0 license. Parts of this project contain code and models from other sources, which remain subject to their respective licenses; you must comply with those licenses, particularly if you intend to use them for commercial purposes.