---
license: mit
library_name: transformers
tags:
- image-to-image
- lineart
inference: false
---
# MangaLineExtraction-hf
The 🤗 Transformers-compatible version of MangaLineExtraction_PyTorch, a model that extracts structural line art from manga images.
Original repo: https://github.com/ljsabc/MangaLineExtraction_PyTorch
## Example
```py
from PIL import Image
import torch
from transformers import AutoModel, AutoImageProcessor

REPO_NAME = "p1atdev/MangaLineExtraction-hf"

# The model and image processor are defined by custom code in the repo,
# so trust_remote_code=True is required.
model = AutoModel.from_pretrained(REPO_NAME, trust_remote_code=True)
processor = AutoImageProcessor.from_pretrained(REPO_NAME, trust_remote_code=True)

image = Image.open("./sample.jpg")

inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.pixel_values)

# The output pixel values are the extracted line art; save them as a grayscale image.
line_image = Image.fromarray(outputs.pixel_values[0].numpy().astype("uint8"), mode="L")
line_image.save("./line_image.png")
```
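To process a whole folder of pages, the same calls can be wrapped in a loop. This is a minimal sketch, assuming the directory names below (which are illustrative) and that the processor accepts one PIL image at a time, as in the example above:

```py
from pathlib import Path

from PIL import Image
import torch
from transformers import AutoModel, AutoImageProcessor

REPO_NAME = "p1atdev/MangaLineExtraction-hf"

model = AutoModel.from_pretrained(REPO_NAME, trust_remote_code=True)
processor = AutoImageProcessor.from_pretrained(REPO_NAME, trust_remote_code=True)

# Hypothetical input/output directories; adjust to your own layout.
src_dir = Path("./pages")
out_dir = Path("./lines")
out_dir.mkdir(exist_ok=True)

for path in sorted(src_dir.glob("*.jpg")):
    image = Image.open(path)
    inputs = processor(image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(inputs.pixel_values)
    # Save the extracted line art under the same stem as the source page.
    line = Image.fromarray(outputs.pixel_values[0].numpy().astype("uint8"), mode="L")
    line.save(out_dir / f"{path.stem}.png")
```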
| sample.jpg | Generated line image |
| --- | --- |
## Model Details
### Model Description
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Chengze Li, Xueting Liu, Tien-Tsin Wong
- **Converted by:** Plat
- **License:** MIT
### Model Sources
- **Repository:** https://github.com/ljsabc/MangaLineExtraction_PyTorch
- **Paper:** https://ttwong12.github.io/papers/linelearn/linelearn.pdf
- **Project page:** https://www.cse.cuhk.edu.hk/~ttwong/papers/linelearn/linelearn.html
## Citation
**BibTeX:**
```bibtex
@article{li-2017-deep,
    author  = {Chengze Li and Xueting Liu and Tien-Tsin Wong},
    title   = {Deep Extraction of Manga Structural Lines},
    journal = {ACM Transactions on Graphics (SIGGRAPH 2017 issue)},
    month   = {July},
    year    = {2017},
    volume  = {36},
    number  = {4},
    pages   = {117:1--117:12},
}
```