|
--- |
|
library_name: pruna-engine |
|
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" |
|
metrics: |
|
- memory_disk |
|
- memory_inference |
|
- inference_latency |
|
- inference_throughput |
|
- inference_CO2_emissions |
|
- inference_energy_consumption |
|
--- |
|
<!-- header start --> |
|
|
<div style="width: auto; margin-left: auto; margin-right: auto"> |
|
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> |
|
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</a> |
|
</div> |
|
<!-- header end --> |
|
|
|
[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) |
|
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) |
|
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) |
|
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu) |
|
|
|
# Simply make AI models cheaper, smaller, faster, and greener! |
|
|
|
- Give a thumbs up if you like this model! |
|
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). |
|
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
|
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
|
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. |
|
|
|
**Frequently Asked Questions** |
|
- ***How does the compression work?*** The model is compressed with 8-bit quantization using bitsandbytes.
|
- ***How does the model quality change?*** The quality of the model output may slightly degrade compared to the base model.
|
- ***What is the model format?*** We use the standard safetensors format.
|
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
|
|
|
# Usage |
|
```python |
|
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration |
|
import torch |
|
from PIL import Image |
|
import requests |
|
|
|
processor = LlavaNextProcessor.from_pretrained("PrunaAI/llava-v1.6-vicuna-7b-bnb-8bit") |
|
|
|
model = LlavaNextForConditionalGeneration.from_pretrained(
    "PrunaAI/llava-v1.6-vicuna-7b-bnb-8bit",
    device_map="cuda:0",  # place the model on the same device as the inputs below
)
|
|
|
# prepare image and text prompt, using the appropriate prompt template |
|
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true" |
|
image = Image.open(requests.get(url, stream=True).raw) |
|
prompt = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:" |
|
|
|
|
|
inputs = processor(prompt, image, return_tensors="pt").to("cuda:0") |
|
|
|
# autoregressively complete prompt |
|
output = model.generate(**inputs, max_new_tokens=100) |
|
|
|
print(processor.decode(output[0], skip_special_tokens=True)) |
|
``` |
|
|
|
## Credits & License |
|
|
|
The license of the smashed model follows the license of the original model. Please check the license of the base model, liuhaotian/llava-v1.6-vicuna-7b, before using this model. The license of `pruna-engine` is available [here](https://pypi.org/project/pruna-engine/) on PyPI.
|
|
|
## Want to compress other models? |
|
|
|
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). |
|
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |