ZhangYuanhan committed on
Commit 95291f9 • 1 Parent(s): 01ad5d5
Update README.md
README.md CHANGED
@@ -83,6 +83,17 @@ The primary intended users of the model are researchers and hobbyists in compute
 
 ### Citations
 ```bibtex
+
+@misc{zhang2024videoinstructiontuningsynthetic,
+    title={Video Instruction Tuning With Synthetic Data},
+    author={Yuanhan Zhang and Jinming Wu and Wei Li and Bo Li and Zejun Ma and Ziwei Liu and Chunyuan Li},
+    year={2024},
+    eprint={2410.02713},
+    archivePrefix={arXiv},
+    primaryClass={cs.CV},
+    url={https://arxiv.org/abs/2410.02713},
+}
+
 @misc{zhang2024llavanextvideo,
     title={LLaVA-NeXT: A Strong Zero-shot Video Understanding Model},
     url={https://llava-vl.github.io/blog/2024-04-30-llava-next-video/},