luodian committed on
Commit
132ba68
1 Parent(s): 2eefe4c

Update README.md

Files changed (1)
  1. README.md +3 -7
README.md CHANGED
@@ -7,8 +7,6 @@ sdk: static
  pinned: false
  ---
 
- ---
-
  - **[2024-10]** 🔥🔥 We present **`LLaVA-Critic`**, the first open-source large multimodal model as a generalist evaluator for assessing LMM-generated responses across diverse multimodal tasks and scenarios.
 
  [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-10-03-llava-critic/)
@@ -20,11 +18,6 @@ pinned: false
  - **[2024-08]** 🤞🤞 We present **`LLaVA-OneVision`**, a family of LMMs developed by consolidating insights into data, models, and visual representations.
 
  [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-08-05-llava-onevision/)
-
- ---
-
- <details>
- <summary>Older Updates (2024-06 and earlier)</summary>
 
  - **[2024-06]** 🧑‍🎨🧑‍🎨 We release **`LLaVA-NeXT-Interleave`**, an LMM extending capabilities to real-world settings: Multi-image, Multi-frame (videos), Multi-view (3D), and Multi-patch (single-image).
 
@@ -34,6 +27,9 @@ pinned: false
 
  [GitHub](https://github.com/EvolvingLMMs-Lab/LongVA) | [Blog](https://lmms-lab.github.io/posts/longva/)
 
+ <details>
+ <summary>Older Updates (2024-06 and earlier)</summary>
+
  - **[2024-06]** 🎬🎬 The **`lmms-eval/v0.2`** toolkit now supports video evaluations for models like LLaVA-NeXT Video and Gemini 1.5 Pro.
 
  [GitHub](https://github.com/EvolvingLMMs-Lab/lmms-eval) | [Blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/)
 