taohu committed · verified
Commit ebbe3d0 · 1 Parent(s): 56d7d4f

Update README.md

Files changed (1):
  1. README.md +14 -11
README.md CHANGED
@@ -16,22 +16,25 @@ tags:
 
 This model represents the official checkpoint of the paper titled "ZigMa: Zigzag Mamba Diffusion Model".
 
-[![Website](doc/badges/badge-website.svg)](https://https://taohu.me/project_zigma)
-[![GitHub](https://img.shields.io/github/stars/prs-eth/Marigold?style=default&label=GitHub%20★&logo=github)](https://github.com/dongzhuoyao/zigma)
-[![Paper](doc/badges/badge-pdf.svg)](https://arxiv.orgg)
+
+[![Website](doc/badges/badge-website.svg)](https://taohu.me/project_zigma)
+[![Paper](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org)
+[![Hugging Face Model](https://img.shields.io/badge/🤗%20Hugging%20Face-Model-green)](https://huggingface.co/Bingxin/Marigold)
 [![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)
 
 
-[Bingxin Ke](http://www.kebingxin.com/),
-[Anton Obukhov](https://www.obukhov.ai/),
-[Shengyu Huang](https://shengyuh.github.io/),
-[Nando Metzger](https://nandometzger.github.io/),
-[Rodrigo Caye Daudt](https://rcdaudt.github.io/),
-[Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ&hl=en)
+[Vincent Tao Hu](http://taohu.me),
+[Stefan Andreas Baumann](https://scholar.google.de/citations?user=egzbdnoAAAAJ&hl=en),
+[Ming Gui](https://www.linkedin.com/in/ming-gui-87b76a16b/?originalSubdomain=de),
+[Olga Grebenkova](https://www.linkedin.com/in/grebenkovao/),
+[Pingchuan Ma](https://www.linkedin.com/in/pingchuan-ma-492543156/),
+[Johannes Fischer](https://www.linkedin.com/in/js-fischer/),
+[Björn Ommer](https://ommer-lab.com/people/ommer/)
 
-We present Marigold, a diffusion model and associated fine-tuning protocol for monocular depth estimation. Its core principle is to leverage the rich visual knowledge stored in modern generative image models. Our model, derived from Stable Diffusion and fine-tuned with synthetic data, can zero-shot transfer to unseen data, offering state-of-the-art monocular depth estimation results.
+We present ZigMa, a scanning scheme that follows a zigzag pattern, considering both spatial continuity and parameter efficiency. We further adapt this scheme to video, separating the reasoning between spatial and temporal dimensions, thus achieving efficient parameter utilization. Our design allows for greater incorporation of inductive bias for non-1D data and improves parameter efficiency in diffusion models.
 
-![teaser](doc/teaser_collage_transparant.png)
+![teaser](doc/teaser_3col.png)
 
 
 ## 🎓 Citation
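The "zigzag pattern" mentioned in the updated abstract can be illustrated with a small sketch. This is an illustrative toy, not code from the ZigMa repository: the function name `zigzag_order` is hypothetical, and it shows only one plausible snake-like scan over a flattened H×W grid, where consecutive tokens in the 1D sequence stay spatially adjacent (the spatial-continuity property the README describes), unlike a plain row-major raster scan that jumps a full row width at each row boundary.

```python
def zigzag_order(height, width):
    """Return flat indices of an H x W grid visited in a snake pattern.

    Even rows are scanned left-to-right, odd rows right-to-left, so each
    step in the resulting 1D sequence moves to a spatially adjacent cell.
    """
    order = []
    for row in range(height):
        # Reverse the column direction on every other row.
        cols = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
        for col in cols:
            order.append(row * width + col)
    return order

# On a 3x3 grid the scan snakes through the cells:
print(zigzag_order(3, 3))  # [0, 1, 2, 5, 4, 3, 6, 7, 8]
```

Note how index 2 is followed by 5 (its vertical neighbor) rather than 3, which a raster scan would visit next; the paper applies this kind of continuity-preserving ordering to the token sequence consumed by the Mamba blocks.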