How to use this model
#2
by cenahwang - opened
Hi, I'm not sure how to run inference with this model. Thanks!
Hi, I'm wondering about this too. Did you find a solution?
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import numpy as np

processor = SegformerImageProcessor.from_pretrained("jonathandinu/face-parsing")
model = AutoModelForSemanticSegmentation.from_pretrained("jonathandinu/face-parsing")

# load an example image
url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)

# preprocess and run inference
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits.cpu()

# upsample the logits back to the original image size
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (W, H), interpolate expects (H, W)
    mode="bilinear",
    align_corners=False,
)

# per-pixel class index
pred_seg = upsampled_logits.argmax(dim=1)[0]
pred_seg_np = pred_seg.numpy()

plt.imshow(pred_seg_np)
plt.axis("off")
plt.show()
Thank you very much. I am able to get the output.
I am wondering how to get the segmentation parts separately, with their names, as in the Hugging Face widget.
Such as left eye, nose, etc.
@Senem
Check config.json in the repository files; you can find the label definitions in the id2label property.
Then just filter the output by the id of the part you want.
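To illustrate the filtering step, here is a minimal self-contained sketch. The id2label mapping below is only an example; the real one comes from the model's config.json (or `model.config.id2label` after loading), so the specific names and ids here are assumptions. `pred_seg_np` stands in for the argmax output from the code above.

```python
import numpy as np

# Illustrative id2label mapping -- check config.json for the real one
id2label = {0: "background", 1: "skin", 2: "nose", 4: "l_eye", 5: "r_eye"}
label2id = {name: idx for idx, name in id2label.items()}

# Stand-in for the (H, W) array of per-pixel class indices
pred_seg_np = np.array([
    [0, 0, 1, 1],
    [1, 2, 2, 1],
    [1, 4, 5, 1],
    [0, 1, 1, 0],
])

# Boolean mask for a single named part: pixels whose class id matches
nose_mask = pred_seg_np == label2id["nose"]

# One mask per part present in the prediction, keyed by label name
masks = {
    id2label[idx]: pred_seg_np == idx
    for idx in np.unique(pred_seg_np)
}
```

Each mask can then be used on its own, e.g. `image_np * nose_mask[..., None]` to cut out just that region.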
finally got around to adding a proper README, thanks for bearing with me and the examples/help here
jonathandinu changed discussion status to closed