michaelryoo committed · Commit 95bda20 (verified) · Parent(s): 85b61b7

Update README.md

Files changed (1): README.md (+96, −3)

README.md:
---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
---

# Model description
`xGen-MM-Vid (BLIP-3-Video)` is an efficient, compact vision-language model (VLM) with an explicit temporal encoder, designed specifically to understand videos. It was developed by Salesforce AI Research. Its key aspect is the incorporation of learnable temporal encoder modules into the original (image-based) BLIP-3 architecture.

Here, we are sharing the 32-token version trained to take 8-frame video inputs. In principle, the model can take any number of frames, but it was trained with 8-frame videos.
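
For illustration, here is a minimal frame-sampling sketch that matches the 8-frame input convention above. It is not the official preprocessing (the inference script linked below is the authoritative reference); it assumes the `decord` video reader is installed, and `video.mp4` is a placeholder path.

```
# Minimal sketch (illustrative only): uniformly sample 8 frames from a clip.
# Not the official preprocessing -- see the inference script below for the
# exact pipeline. Assumes the `decord` package; "video.mp4" is a placeholder.
import numpy as np
from decord import VideoReader

def sample_frames(video_path, num_frames=8):
    vr = VideoReader(video_path)
    # Evenly spaced frame indices across the whole clip.
    indices = np.linspace(0, len(vr) - 1, num_frames).astype(int)
    return vr.get_batch(indices.tolist()).asnumpy()  # (num_frames, H, W, 3)

frames = sample_frames("video.mp4")
```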

The 128-token version of the same model can be found at [BLIP-3-Video 128 token model](https://huggingface.co/Salesforce/xgen-mm-vid-phi3-mini-r-v1.5-128tokens-8frames/).

For more details, check out our [tech report](https://arxiv.org/pdf/2410.16267). A more detailed explanation can also be found in the [blog article](https://www.salesforceairesearch.com/opensource/xGen-MM-Vid/index.html).


# Results

### Tokens vs. accuracy

<p>
<figure style="max-width: 480px; margin: 0 auto;">
<a href="https://www.salesforceairesearch.com/opensource/xGen-MM-Vid/figures/tokens-vs-accuracy.png"><img src="https://www.salesforceairesearch.com/opensource/xGen-MM-Vid/figures/tokens-vs-accuracy.png"></a>
</figure>
</p>

The figure above shows the trade-off between the number of visual tokens and accuracy for various video models, including xGen-MM-Vid (BLIP-3-Video), on the MSVD-QA dataset.


### Examples

<p>
<figure style="max-width: 480px; margin: 0 auto;">
<video style="max-width:100%;width:480px" autoplay muted controls loop>
<source src="https://www.salesforceairesearch.com/opensource/xGen-MM-Vid/figures/xgen-mm-vid1.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</figure>
</p>

<p>
<figure style="max-width: 480px; margin: 0 auto;">
<video style="max-width:100%;width:480px" autoplay muted controls loop>
<source src="https://www.salesforceairesearch.com/opensource/xGen-MM-Vid/figures/xgen-mm-vid2.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</figure>
</p>


# How to use

Please check out our [inference script](xgen-mm-vid-inference-script_hf.py) for an example of how to use the model; a minimal loading sketch is also shown below. This codebase is based on [xGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5).
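
As a convenience, here is a minimal loading sketch. It assumes this checkpoint follows the same `trust_remote_code` loading convention as the xGen-MM image models, and the repo id is inferred from the 128-token sibling's naming; the inference script above remains the authoritative reference.

```
# Minimal loading sketch -- illustrative, not authoritative; see the
# inference script for exact usage. Assumes the xGen-MM remote-code
# convention; the repo id below is inferred from the 128-token sibling.
import torch
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor

model_id = "Salesforce/xgen-mm-vid-phi3-mini-r-v1.5-32tokens-8frames"  # assumed id

model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False)
image_processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)

model = model.to("cuda" if torch.cuda.is_available() else "cpu").eval()
# The 8 sampled frames (see the sampling sketch above) are preprocessed per
# frame with `image_processor` and passed to the model with the tokenized
# prompt; the inference script shows the exact prompt format and generation call.
```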


# Bias, Risks, Limitations, and Ethical Considerations
The main data sources are from the internet, including webpages, video stock sites, and curated datasets released by the research community.
The model may be subject to bias from the original data sources, as well as bias from LLMs and commercial APIs.
We strongly recommend that users assess safety and fairness before applying the model to downstream applications.


# License

Our code and weights are released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en) license.


# Code acknowledgment
Our code/model is built on top of [xGen-MM](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5).


# Citation
```
@misc{blip3video-xgenmmvid,
  author        = {Michael S. Ryoo and Honglu Zhou and Shrikant Kendre and Can Qin and Le Xue and Manli Shu and Silvio Savarese and Ran Xu and Caiming Xiong and Juan Carlos Niebles},
  title         = {xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs},
  year          = {2024},
  eprint        = {2410.16267},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2410.16267},
}
```

# Troubleshooting

1. If you are missing any packages, consider installing the following:

```
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
pip install transformers==4.41.1
```