Update README.md
@@ -26,6 +26,8 @@ You can run BakLLaVA-1 on our repo. We are currently updating it to make it easi
First, make sure to have `transformers >= 4.35.3`.

The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` at the location where you want to query images:
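As a minimal sketch of the template above: the `build_prompt` helper below is hypothetical (not part of the model card), shown only to illustrate how a multi-image, multi-prompt string can be assembled with one `<image>` token per question.

```python
def build_prompt(questions):
    """Build a multi-turn prompt: one `<image>` token per question,
    each turn following the `USER: xxx\\nASSISTANT:` template."""
    turns = [f"USER: <image>\n{q}\nASSISTANT:" for q in questions]
    return " ".join(turns)


prompt = build_prompt(
    ["What is shown in this image?", "How do the two images differ?"]
)
print(prompt)
```

Each turn opens with `USER: <image>` and closes with `ASSISTANT:`, so the number of `<image>` tokens matches the number of images you pass alongside the prompt.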
Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing)
### Using `pipeline`:
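The snippet under this heading was not captured in the diff; as a sketch of how such a call might look with the transformers `image-to-text` pipeline (the checkpoint id `llava-hf/bakLlava-v1-hf` and the helper name are assumptions, not taken from the model card):

```python
def run_pipeline(image, question, model_id="llava-hf/bakLlava-v1-hf"):
    """Sketch: query the model via the transformers image-to-text pipeline.

    `image` can be a PIL.Image or an image URL. The checkpoint id default
    is an assumption; replace it with the actual repository id.
    """
    # Imported lazily so this sketch can be loaded without downloading weights.
    from transformers import pipeline

    pipe = pipeline("image-to-text", model=model_id)
    # Follow the documented template and place `<image>` where the image goes.
    prompt = f"USER: <image>\n{question}\nASSISTANT:"
    outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
    return outputs[0]["generated_text"]
```

Calling `run_pipeline(image, "What is shown in this image?")` downloads the checkpoint on first use, so in practice you would construct the pipeline once and reuse it across calls.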