FoB

arXiv Β· Paper Β· Code

Model checkpoints for the paper: Focus on Background: Exploring SAM’s Potential in Few-shot Medical Image Segmentation with Background-centric Prompting

  • [News!] 2026-02-21: Our work is accepted by CVPR 2026! πŸŽ‰
  • [News!] 2025-07-23: We have uploaded the full code.

πŸ“‹ Abstract


Conventional few-shot medical image segmentation (FSMIS) approaches face performance bottlenecks that hinder broader clinical applicability. Although the Segment Anything Model (SAM) exhibits strong category-agnostic segmentation capabilities, its direct application to medical images often leads to over-segmentation due to ambiguous anatomical boundaries. In this paper, we reformulate SAM-based FSMIS as a prompt localization task and propose FoB (Focus on Background), a background-centric prompt generator that provides accurate background prompts to constrain SAM's over-segmentation. Specifically, FoB bridges the gap between segmentation and prompt localization by generating support background prompts in a category-agnostic manner and localizing them directly in the query image. To address the challenge of prompt localization for novel categories, FoB models rich contextual information to capture foreground-background spatial dependencies. Moreover, inspired by the inherent structural patterns of background prompts in medical images, FoB models this structure as a constraint to progressively refine background prompt predictions. Experiments on three diverse medical image datasets demonstrate that FoB outperforms other baselines by large margins, achieving state-of-the-art performance on FSMIS and exhibiting strong cross-domain generalization.

πŸ€” Motivation

We observe that SAM often suffers from over-segmentation when applied to medical images, and we find that incorporating accurate background prompts can effectively constrain this over-segmentation. However, prior methods focus only on generating accurate foreground prompts, leaving the role of background prompts largely underexplored. We aim to bridge this gap, as illustrated by the sketch below.
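To make the mechanism concrete, here is a minimal sketch of how background (negative) point prompts constrain SAM using the off-the-shelf segment-anything predictor: points labeled 0 are treated as background and push the predicted mask away from those regions. This is not the FoB pipeline; the image path and point coordinates are illustrative placeholders, whereas in FoB the background prompts would come from the learned prompt generator.

```python
# Minimal sketch: constraining SAM with background (negative) point prompts.
# Assumes the official segment-anything package and a downloaded ViT-B checkpoint;
# the image path and point coordinates below are illustrative placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("query_slice.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # expects an HxWx3 uint8 RGB array

# One foreground click on the target structure, plus background clicks on
# neighboring anatomy; label 1 = foreground, label 0 = background.
point_coords = np.array([[256, 256], [180, 120], [340, 300]])
point_labels = np.array([1, 0, 0])

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,  # single mask; background points rein it in
)
```

Adding more (or better-placed) background points tightens the mask further, which is exactly the lever FoB automates by localizing accurate background prompts in the query image.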

πŸ₯° Acknowledgement

Our implementation builds on the following works: ALPNet, ADNet, segment-anything, and ProtoSAM. Thanks for their excellent work!

πŸ“ Citation

If you use this code for your research or project, please consider citing our paper. Thanks! πŸ₯‚

@inproceedings{bo2026fob,
  title={Focus on Background: Exploring SAM’s Potential in Few-shot Medical Image Segmentation with Background-centric Prompting},
  author={Bo, Yuntian and Zhu, Yazhou and Koniusz, Piotr and Zhang, Haofeng},
  booktitle={2026 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026},
}