Update README.md
README.md
@@ -7,6 +7,10 @@ sdk: static
 pinned: false
 ---
 
+- **[2024-11]** 🔥🔥 We introduce **Multimodal SAE**, the first framework designed to interpret learned features in large-scale multimodal models using Sparse Autoencoders. Through our approach, we leverage LLaVA-OneVision-72B to analyze and explain the SAE-derived features of LLaVA-NeXT-LLaMA3-8B. Furthermore, we demonstrate the ability to steer model behavior by clamping specific features to alleviate hallucinations and avoid safety-related issues.
+
+[GitHub](https://github.com/EvolvingLMMs-Lab/multimodal-sae)
+
 - **[2024-10]** 🔥🔥 We present **`LLaVA-Critic`**, the first open-source large multimodal model as a generalist evaluator for assessing LMM-generated responses across diverse multimodal tasks and scenarios.
 
 [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-10-03-llava-critic/)
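
For readers wondering what "clamping specific features" of a Sparse Autoencoder looks like in practice, here is a minimal, hypothetical PyTorch sketch: it encodes a hidden state into SAE features, pins one chosen feature to a fixed value, and decodes back. All names, shapes, and the `steer` helper are illustrative assumptions and do not reflect the actual API of the multimodal-sae repository linked above.

```python
# Hypothetical sketch of SAE feature clamping for steering; not the repo's API.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # hidden state -> feature activations
        self.decoder = nn.Linear(d_features, d_model)  # feature activations -> reconstruction

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # ReLU keeps feature activations non-negative and (in a trained SAE) sparse.
        return torch.relu(self.encoder(h))

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return self.decoder(f)


def steer(h: torch.Tensor, sae: SparseAutoencoder, feature_idx: int, value: float) -> torch.Tensor:
    """Clamp one SAE feature to a fixed value and decode back to the hidden-state space."""
    f = sae.encode(h)
    f[..., feature_idx] = value  # e.g. 0.0 to suppress the feature, >0 to amplify it
    return sae.decode(f)


# Usage: suppress feature 1234 in a batch of dummy hidden states.
# Real model dimensions would be much larger; small sizes keep the sketch cheap to run.
sae = SparseAutoencoder(d_model=1024, d_features=4096)
hidden = torch.randn(2, 16, 1024)  # (batch, tokens, d_model)
with torch.no_grad():
    steered = steer(hidden, sae, feature_idx=1234, value=0.0)
```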