This is the trained ControlNet-StableDiffusion model for scene text erasing (Diff_SceneTextEraser). It uses a customized ControlNet-StableDiffusion inpainting pipeline.

The training and inference code for Diff_SceneTextEraser is available in the GitHub repository below.

For direct inference:

Step 1: Clone the GitHub repository to get the customized ControlNet-StableDiffusion inpainting pipeline implementation:

```
git clone https://github.com/Onkarsus13/Diff_SceneTextEraser
```

Step 2: Enter the repository and install it together with its dependencies:

```
cd Diff_SceneTextEraser
pip install -e ".[torch]"
pip install -e ".[all,dev,notebooks]"
```
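
As an optional sanity check (not part of the original instructions), you can verify that the editable install exposes the customized pipeline class; if the import below fails, the installation did not pick up the patched `diffusers`:

```python
# Optional check: the customized pipeline class should now be importable from
# the patched diffusers package installed above.
from diffusers import StableDiffusionControlNetSceneTextErasingPipeline
print(StableDiffusionControlNetSceneTextErasingPipeline.__name__)
```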

Step 3: Run `python test_eraser.py`, or use the code below:

```python
from diffusers import (
    UniPCMultistepScheduler,
    DDIMScheduler,
    EulerAncestralDiscreteScheduler,
    StableDiffusionControlNetSceneTextErasingPipeline,
)
import torch
import numpy as np
import cv2
from PIL import Image, ImageDraw
import math
import os

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = "onkarsus13/controlnet_stablediffusion_scenetextEraser"

pipe = StableDiffusionControlNetSceneTextErasingPipeline.from_pretrained(model_path)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to(device)

# Optional memory optimizations:
# pipe.enable_xformers_memory_efficient_attention()  # requires xformers to be installed
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to reduce GPU memory use
generator = torch.Generator(device).manual_seed(1)

image = Image.open("<path to scene text image>").resize((512, 512))
mask_image = Image.open("<path to the corresponding mask image>").resize((512, 512))

# Run the erasing pipeline and take the first output image (the text-removed result)
image = pipe(
    image,
    mask_image,
    [mask_image],
    num_inference_steps=20,
    generator=generator,
    controlnet_conditioning_scale=1.0,
    guidance_scale=1.0
).images[0]
image.save('test1.png')
```
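
The pipeline expects a mask image that marks the text regions to erase. If you only have text bounding boxes (for example, from a text detector), you can draw such a mask with PIL. This is a minimal sketch, not part of the original repository; the `make_text_mask` helper and the box coordinates are hypothetical, and it assumes the usual inpainting convention of white pixels marking the region to erase (check the repository's sample data if your results look inverted):

```python
from PIL import Image, ImageDraw

def make_text_mask(size, boxes):
    """Build a binary mask: white rectangles over text regions, black elsewhere.

    size  -- (width, height) of the target image
    boxes -- list of (x0, y0, x1, y1) text boxes, e.g. from a text detector
    """
    mask = Image.new("L", size, 0)        # start fully black (nothing erased)
    draw = ImageDraw.Draw(mask)
    for box in boxes:
        draw.rectangle(box, fill=255)     # white = region to erase / inpaint
    return mask

# Hypothetical example: two text regions in a 512x512 image
mask_image = make_text_mask((512, 512), [(40, 60, 220, 110), (300, 400, 480, 450)])
mask_image.save("mask.png")
```

The saved `mask.png` can then stand in for the `<path to the corresponding mask image>` placeholder above.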