---
license: apache-2.0
language:
- en
pipeline_tag: depth-estimation
tags:
- depth estimation
- latent consistency model
- image analysis
- computer vision
- in-the-wild
- zero-shot
---
# Marigold Depth LCM v1-0 Model Card
This model is deprecated. Use the new Marigold Depth v1-1 Model instead.
This is a model card for the `marigold-depth-lcm-v1-0` model for monocular depth estimation from a single image. The model is fine-tuned from the `marigold-depth-v1-0` model using the latent consistency distillation method, as described in a follow-up of our CVPR 2024 paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation".
- Play with the interactive Hugging Face Spaces demo: check out how the model works with example images or upload your own.
- Use it with diffusers to compute depth predictions in a few lines of code; a sketch follows this list.
- Get to the bottom of things with our official codebase.
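
As an illustration of the diffusers usage mentioned above, here is a minimal sketch. It assumes the checkpoint is published on the Hub as `prs-eth/marigold-depth-lcm-v1-0` and that a recent diffusers release (0.28 or later, which ships the Marigold pipelines) is installed; the input image path is a placeholder.

```python
import torch
import diffusers

# Load the LCM depth checkpoint; fp16 keeps memory modest on a single GPU.
pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-lcm-v1-0",  # assumed Hub repo id of this model
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# Any RGB image works; replace the path with your own input.
image = diffusers.utils.load_image("input.jpg")

# A single call returns an affine-invariant depth prediction with values in [0, 1].
depth = pipe(image)

# Colorize the prediction and save it for inspection.
vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("depth_colored.png")
```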
## Model Details
- Developed by: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler.
- Model type: Generative latent diffusion-based affine-invariant monocular depth estimation from a single image.
- Language: English.
- License: Apache License, Version 2.0.
- Model Description: This model can be used to generate an estimated depth map of an input image.
- Resolution: Even though any resolution can be processed, the model inherits the base diffusion model's effective resolution of roughly 768 pixels. This means that for optimal predictions, any larger input image should be resized to make the longer side 768 pixels before feeding it into the model.
- Steps and scheduler: This model was designed for use with the LCM scheduler and between 1 and 4 denoising steps; see the sketch at the end of this card.
- Outputs:
- Affine-invariant depth map: The predicted values are between 0 and 1, interpolating between the near and far planes of the model's choice.
- Uncertainty map: Produced only when multiple predictions are ensembled, with an ensemble size larger than 2.
- Resources for more information: Project Website, Paper, Code.
- Cite as:
Placeholder for the citation block of the follow-up paper
```bibtex
@InProceedings{ke2023repurposing,
  title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
  author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```
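
As referenced in the list above, the following sketch shows how the step count, processing resolution, and ensembling interact. It rests on the same assumptions as the earlier snippet (the `prs-eth/marigold-depth-lcm-v1-0` repo id, diffusers 0.28 or later, and a placeholder input path); `processing_resolution`, `num_inference_steps`, `ensemble_size`, and `output_uncertainty` are standard arguments of the diffusers Marigold depth pipeline.

```python
import torch
import diffusers

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-lcm-v1-0", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("input.jpg")  # placeholder input

depth = pipe(
    image,
    processing_resolution=768,  # resize so the longer side is 768 px, per the note above
    num_inference_steps=4,      # the LCM scheduler targets 1 to 4 denoising steps
    ensemble_size=5,            # ensembling more than 2 predictions enables uncertainty
    output_uncertainty=True,
)

pipe.image_processor.visualize_depth(depth.prediction)[0].save("depth.png")
pipe.image_processor.visualize_uncertainty(depth.uncertainty)[0].save("uncertainty.png")
```

With ensembling enabled, `depth.uncertainty` holds the per-pixel disagreement across ensemble members, corresponding to the uncertainty map described in the outputs above.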