ZhengPeng7 committed • Commit 02eb783 • 1 Parent(s): dd9a213

Update README to lite version.

README.md CHANGED
@@ -3,9 +3,7 @@ library_name: BiRefNet
 tags:
 - background-removal
 - mask-generation
--
-- Camouflaged Object Detection
-- Salient Object Detection
+- Image Matting
 - pytorch_model_hub_mixin
 - model_hub_mixin
 repo_url: https://github.com/ZhengPeng7/BiRefNet-portrait
@@ -34,132 +32,37 @@ pipeline_tag: image-segmentation
 <a href='https://drive.google.com/drive/folders/1s2Xe0cjq-2ctnJBR24563yMSCOu4CcxM'><img src='https://img.shields.io/badge/Drive-Stuff-green'></a> 
 <a href='LICENSE'><img src='https://img.shields.io/badge/License-MIT-yellow'></a> 
 <a href='https://huggingface.co/spaces/ZhengPeng7/BiRefNet_demo'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HF%20Spaces-BiRefNet-blue'></a> 
-<a href='https://huggingface.co/ZhengPeng7/BiRefNet
+<a href='https://huggingface.co/ZhengPeng7/BiRefNet'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HF%20Models-BiRefNet-blue'></a> 
 <a href='https://colab.research.google.com/drive/14Dqg7oeBkFEtchaHLNpig2BcdkZEogba?usp=drive_link'><img src='https://img.shields.io/badge/Single_Image_Inference-F9AB00?style=for-the-badge&logo=googlecolab&color=525252'></a> 
 <a href='https://colab.research.google.com/drive/1MaEiBfJ4xIaZZn0DqKrhydHB8X97hNXl#scrollTo=DJ4meUYjia6S'><img src='https://img.shields.io/badge/Inference_&_Evaluation-F9AB00?style=for-the-badge&logo=googlecolab&color=525252'></a> 
 </div>
 
 
-
-| :------------------------------: | :-------------------------------: |
-| <img src="https://drive.google.com/thumbnail?id=1ItXaA26iYnE8XQ_GgNLy71MOWePoS2-g&sz=w400" /> | <img src="https://drive.google.com/thumbnail?id=1Z-esCujQF_uEa_YJjkibc3NUrW4aR_d4&sz=w400" /> |
+## This repo holds the official weights of BiRefNet for portrait matting.
 
-
+### Training Sets:
++ P3M-10k (except TE-P3M-500-P)
++ [TR-humans](https://huggingface.co/datasets/schirrmacher/humans)
 
-Visit our GitHub repo: [https://github.com/ZhengPeng7/BiRefNet](https://github.com/ZhengPeng7/BiRefNet) for more details -- **codes**, **docs**, and **model zoo**!
 
-
+### Validation Sets:
++ TE-P3M-500-P
 
-###
-
-
-
-
-### 1. Load BiRefNet:
-
-#### Use codes + weights from HuggingFace
-> Only use the weights on HuggingFace -- Pro: No need to download BiRefNet codes manually; Con: Codes on HuggingFace might not be the latest version (I'll try to keep them up to date).
-
-```python
-# Load BiRefNet with weights
-from transformers import AutoModelForImageSegmentation
-birefnet = AutoModelForImageSegmentation.from_pretrained('zhengpeng7/BiRefNet-portrait', trust_remote_code=True)
-```
-
-#### Use codes from GitHub + weights from HuggingFace
-> Only use the weights on HuggingFace -- Pro: codes are always the latest; Con: Need to clone the BiRefNet repo from my GitHub.
-
-```shell
-# Download codes
-git clone https://github.com/ZhengPeng7/BiRefNet.git
-cd BiRefNet
-```
-
-```python
-# Use codes locally
-from models.birefnet import BiRefNet
-
-# Load weights from Hugging Face Models
-birefnet = BiRefNet.from_pretrained('zhengpeng7/BiRefNet-portrait')
-```
-
-#### Use codes from GitHub + weights from local space
-> Use both the weights and codes locally.
-
-```python
-# Use codes and weights locally
-import torch
-from models.birefnet import BiRefNet
-from utils import check_state_dict
-
-birefnet = BiRefNet(bb_pretrained=False)
-state_dict = torch.load(PATH_TO_WEIGHT, map_location='cpu')
-state_dict = check_state_dict(state_dict)
-birefnet.load_state_dict(state_dict)
-```
-
-#### Use the loaded BiRefNet for inference
-```python
-# Imports
-from PIL import Image
-import matplotlib.pyplot as plt
-import torch
-from torchvision import transforms
-from models.birefnet import BiRefNet
-
-birefnet = ...  # -- BiRefNet should be loaded with the codes above, either way.
-torch.set_float32_matmul_precision(['high', 'highest'][0])
-birefnet.to('cuda')
-birefnet.eval()
-
-def extract_object(birefnet, imagepath):
-    # Data settings
-    image_size = (1024, 1024)
-    transform_image = transforms.Compose([
-        transforms.Resize(image_size),
-        transforms.ToTensor(),
-        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-    ])
-
-    image = Image.open(imagepath)
-    input_images = transform_image(image).unsqueeze(0).to('cuda')
-
-    # Prediction
-    with torch.no_grad():
-        preds = birefnet(input_images)[-1].sigmoid().cpu()
-    pred = preds[0].squeeze()
-    pred_pil = transforms.ToPILImage()(pred)
-    mask = pred_pil.resize(image.size)
-    image.putalpha(mask)
-    return image, mask
-
-# Visualization
-plt.axis("off")
-plt.imshow(extract_object(birefnet, imagepath='PATH-TO-YOUR_IMAGE.jpg')[0])
-plt.show()
-
-```
-
-
-> This BiRefNet for standard dichotomous image segmentation (DIS) is trained on **DIS-TR** and validated on **DIS-TEs and DIS-VD**.
-
-## This repo holds the official model weights of "[<ins>Bilateral Reference for High-Resolution Dichotomous Image Segmentation</ins>](https://arxiv.org/pdf/2401.03407)" (_CAAI AIR 2024_).
-
-This repo contains the weights of BiRefNet proposed in our paper, which has achieved SOTA performance on three tasks (DIS, HRSOD, and COD).
-
-Go to my GitHub page for BiRefNet codes and the latest updates: https://github.com/ZhengPeng7/BiRefNet :)
+### Performance:
+| Dataset | Method | Smeasure | maxFm | meanEm | MAE | maxEm | meanFm | wFmeasure | adpEm | adpFm | HCE |
+| :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: |
+| TE-P3M-500-P | BiRefNet-portrait--epoch_150 | .983 | .996 | .991 | .006 | .997 | .988 | .990 | .933 | .965 | .000 |
 
 
-
+**Check the main BiRefNet model repo for more info and how to use it:**
+https://huggingface.co/ZhengPeng7/BiRefNet/blob/main/README.md
 
-
-
-+ **Inference and evaluation** of your given weights: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1MaEiBfJ4xIaZZn0DqKrhydHB8X97hNXl#scrollTo=DJ4meUYjia6S)
-<img src="https://drive.google.com/thumbnail?id=12XmDhKtO1o2fEvBu4OE4ULVB2BK0ecWi&sz=w1080" />
+**Also check the GitHub repo of BiRefNet for all things you may want:**
+https://github.com/ZhengPeng7/BiRefNet
 
 ## Acknowledgement:
 
-+ Many thanks to @fal for their generous support on GPU resources for training
-+ Many thanks to @not-lain for his help on the better deployment of our BiRefNet model on HuggingFace.
++ Many thanks to @fal for their generous support on GPU resources for training this BiRefNet for portrait matting.
 
 
 ## Citation
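The new lite README above lists its training data only by name. For readers who want to inspect the linked TR-humans set, here is a minimal sketch using the Hugging Face `datasets` library; that this repo works with the generic loader, and the `split="train"` argument, are assumptions rather than details given in the README:

```python
# Minimal sketch: load and inspect the TR-humans dataset linked in the new README.
# Assumption: the dataset loads via the generic `datasets` API; the split name
# and record fields are illustrative, not taken from the README.
from datasets import load_dataset

ds = load_dataset("schirrmacher/humans", split="train")
print(ds)            # row count and column names
print(ds[0].keys())  # fields of one record (image columns typically decode to PIL)
```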
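The performance table in the new README uses the standard high-resolution segmentation and matting metrics (S-measure; max/mean/adaptive F-measure and E-measure; weighted F-measure; MAE; HCE). The official evaluation code lives in the BiRefNet GitHub repo; purely as a rough illustration of the simplest column, MAE is the mean absolute difference between the predicted mask and the ground truth, both scaled to [0, 1]:

```python
# Rough illustration of the MAE column above; NOT the official evaluation
# script (see the BiRefNet GitHub repo). File paths are placeholders.
import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    pred = np.asarray(Image.open(pred_path).convert('L'), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(gt_path).convert('L'), dtype=np.float64) / 255.0
    assert pred.shape == gt.shape, "prediction and ground truth must match in size"
    # An MAE around .006, as in the TE-P3M-500-P row, means predicted and
    # ground-truth mattes differ by roughly 0.6% per pixel on average.
    return np.abs(pred - gt).mean()
```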
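Since the lite README defers all usage instructions to the main model repo, the quickest way to run these portrait-matting weights remains the `from_pretrained` call that this commit removed from the old version of the file:

```python
# Load this repo's weights, as in the snippet the commit removed above.
# trust_remote_code fetches the BiRefNet model definition along with the weights.
from transformers import AutoModelForImageSegmentation

birefnet = AutoModelForImageSegmentation.from_pretrained(
    'zhengpeng7/BiRefNet-portrait', trust_remote_code=True
)
birefnet.eval()  # preprocessing and inference: see the main BiRefNet README linked above
```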