---
license: other
license_name: bria-2.3
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
inference: false
tags:
  - text-to-image
  - controlnet model
  - legal liability
  - commercial use
extra_gated_description: BRIA 2.3 ControlNet-Background Generation requires access to BRIA 2.3 Text-to-Image model
extra_gated_heading: "Fill in this form to get access"
extra_gated_fields:
  Name:
    type: text
  Company/Org name:
    type: text
  Org Type (Early/Growth Startup, Enterprise, Academy):
    type: text
  Role:
    type: text
  Country:
    type: text
  Email:
    type: text
  By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below:
    type: checkbox
---

# BRIA-2.3-ControlNet-Background-Generation Model Card

BRIA 2.3 ControlNet-Background Generation, trained on the foundation of [BRIA 2.3 Text-to-Image](https://huggingface.co/briaai/BRIA-2.3), generates high-quality images guided by a textual prompt and a background mask estimated from an input image. This makes it possible to create different background variations of an image that all share the same foreground.

### Get Access
BRIA 2.3 ControlNet-Background Generation requires access to the BRIA 2.3 foundation model. For more information, [click here](https://huggingface.co/briaai/BRIA-2.3).

- **API Endpoint**: [Bria.ai](https://platform.bria.ai/console/api/image-editing), [fal.ai](https://fal.ai/models/fal-ai/bria/background/replace)
- **ComfyUI**: [Use it in workflows](https://github.com/Bria-AI/ComfyUI-BRIA-API)

For more information, please visit our [website](https://bria.ai/).

Join our [Discord community](https://discord.gg/Nxe9YW9zHS) for more information, tutorials, tools, and to connect with other users!

[CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-Background-Generation)

![examples](bg_img.png)

### Model Description

- **Developed by:** BRIA AI
- **Model type:** [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) for latent diffusion
- **License:** [bria-2.3](https://bria.ai/bria-huggingface-model-license-agreement/)
- **Model Description:** ControlNet Background-Generation for the BRIA 2.3 Text-to-Image model. The model generates images guided by a text prompt and a background mask.
- **Resources for more information:** [BRIA AI](https://bria.ai/)

## Usage

### Installation

Install `huggingface_hub` and log in if needed:

- https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started
- https://huggingface.co/docs/huggingface_hub/en/quick-start#authentication

Download and install the requirements for BRIA-2.3-ControlNet-BG-Gen:

```bash
pip install -qr https://huggingface.co/briaai/BRIA-2.3-ControlNet-BG-Gen/resolve/main/requirements.txt
```

The requirements file contains:

```text
torch
torchvision
pillow
numpy
scikit-image
diffusers==0.31.0
transformers>=4.39.1
```

Download the `replace_bg` helper package from the model repository:

```bash
huggingface-cli download briaai/BRIA-2.3-ControlNet-BG-Gen --include replace_bg/* --local-dir . \
  --quiet
```

Run the background generation (inpainting) script:

```python
import torch
from diffusers import (
    AutoencoderKL,
    EulerAncestralDiscreteScheduler,
)
from diffusers.utils import load_image
from replace_bg.model.pipeline_controlnet_sd_xl import StableDiffusionXLControlNetPipeline
from replace_bg.model.controlnet import ControlNetModel
from replace_bg.utilities import resize_image, remove_bg_from_image, paste_fg_over_image, get_control_image_tensor

# Load the ControlNet and the fp16-fixed SDXL VAE, then build the pipeline
# on top of the BRIA 2.3 text-to-image foundation model.
controlnet = ControlNetModel.from_pretrained("briaai/BRIA-2.3-ControlNet-BG-Gen", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained("briaai/BRIA-2.3", controlnet=controlnet, torch_dtype=torch.float16, vae=vae).to("cuda:0")
pipe.scheduler = EulerAncestralDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    num_train_timesteps=1000,
    steps_offset=1
)

# Load the input image, resize it, and estimate the foreground/background mask.
image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
image = load_image(image_path)
image = resize_image(image)
mask = remove_bg_from_image(image_path)

# Encode the masked image into the conditioning tensor for the ControlNet.
control_tensor = get_control_image_tensor(pipe.vae, image, mask)

prompt = "in a zoo"
negative_prompt = "Logo,Watermark,Text,Ugly,Bad proportions,Bad quality,Out of frame,Mutation"
generator = torch.Generator(device="cuda:0").manual_seed(0)

# Generate a new background guided by the prompt and the control tensor.
gen_img = pipe(
    negative_prompt=negative_prompt,
    prompt=prompt,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=50,
    image=control_tensor,
    generator=generator,
).images[0]

# Paste the original foreground back over the generated image so the
# foreground pixels are preserved exactly.
result_image = paste_fg_over_image(gen_img, image, mask)
```
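Because the mask and control tensor are computed once from the input image, different background variations sharing the same foreground can be produced simply by changing the prompt and seed. A minimal sketch that reuses the objects from the script above; the prompt list and output filenames are illustrative, and it assumes `paste_fg_over_image` returns a PIL image:

```python
# Reuse `pipe`, `control_tensor`, `image`, `mask`, and `negative_prompt`
# from the script above to render several background variations.
# The prompts and filenames below are illustrative.
for i, variation_prompt in enumerate(["in a zoo", "on a beach at sunset", "in a snowy forest"]):
    gen = torch.Generator(device="cuda:0").manual_seed(i)
    variation = pipe(
        prompt=variation_prompt,
        negative_prompt=negative_prompt,
        controlnet_conditioning_scale=1.0,
        num_inference_steps=50,
        image=control_tensor,
        generator=gen,
    ).images[0]
    # Composite the original foreground over each variation and save it.
    paste_fg_over_image(variation, image, mask).save(f"variation_{i}.png")
```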