jadechoghari committed
Commit 40136d3
1 Parent(s): eeb4f75

Update README.md

Files changed (1)
  1. README.md +0 -20
README.md CHANGED
@@ -14,18 +14,11 @@ Official repository for RobustSAM: Segment Anything Robustly on Degraded Images
  [Project Page](https://robustsam.github.io/) | [Paper](https://arxiv.org/abs/2406.09627) | [Video](https://www.youtube.com/watch?v=Awukqkbs6zM) | [Dataset](https://huggingface.co/robustsam/robustsam/tree/main/dataset)


- ## Updates
- - July 2024: ✨ Training code, data and model checkpoints for different ViT backbones are released!
- - June 2024: ✨ Inference code has been released!
- - Feb 2024: ✨ RobustSAM was accepted into CVPR 2024!
-
-
  ## Introduction
  Segment Anything Model (SAM) has emerged as a transformative approach in image segmentation, acclaimed for its robust zero-shot segmentation capabilities and flexible prompting system. Nonetheless, its performance is challenged by images with degraded quality. Addressing this limitation, we propose the Robust Segment Anything Model (RobustSAM), which enhances SAM's performance on low-quality images while preserving its promptability and zero-shot generalization.

  Our method leverages the pre-trained SAM model with only marginal parameter increments and computational requirements. The additional parameters of RobustSAM can be optimized within 30 hours on eight GPUs, demonstrating its feasibility and practicality for typical research laboratories. We also introduce the Robust-Seg dataset, a collection of 688K image-mask pairs with different degradations designed to train and evaluate our model optimally. Extensive experiments across various segmentation tasks and datasets confirm RobustSAM's superior performance, especially under zero-shot conditions, underscoring its potential for extensive real-world application. Additionally, our method has been shown to effectively improve the performance of SAM-based downstream tasks such as single image dehazing and deblurring.

- <img width="1096" alt="image" src="figures/architecture.jpg">

  **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything).

@@ -121,9 +114,6 @@ plt.axis("off")
  plt.show()
  ```

- ## Comparison of computational requirements
- <img width="720" alt="image" src='figures/Computational requirements.PNG'>
-
  ## Visual Comparison
  <table>
  <tr>
@@ -146,16 +136,6 @@ plt.show()

  <img width="1096" alt="image" src='figures/qualitative_result.PNG'>

- ## Quantitative Comparison
- ### Seen dataset with synthetic degradation
- <img width="720" alt="image" src='figures/seen_dataset_with_synthetic_degradation.PNG'>
-
- ### Unseen dataset with synthetic degradation
- <img width="720" alt="image" src='figures/unseen_dataset_with_synthetic_degradation.PNG'>
-
- ### Unseen dataset with real degradation
- <img width="600" alt="image" src='figures/unseen_dataset_with_real_degradation.PNG'>
-
  ## Reference
  If you find this work useful, please consider citing us!
  ```python
 