---
license: mit
---

# pOps: Photo-Inspired Diffusion Operators
[**Project Page**](https://popspaper.github.io/pOps/) **|** [**Paper**](https://popspaper.github.io/pOps/static/files/pOps_paper.pdf) **|** [**Code**](https://github.com/pOpsPaper/pOps)

---

## Introduction

This repository hosts different operators trained using pOps. Our method learns operators that are applied directly in the image embedding space, resulting in a variety of semantic operations that can then be realized as images using an image diffusion model.
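The paper builds on the Kandinsky 2.2 prior, whose outputs live in a CLIP image-embedding space, so an operator's output embedding can be decoded into pixels with the public Kandinsky 2.2 decoder. Below is a minimal, hypothetical sketch of only that decoding step using the `diffusers` library; the `image_embeds` placeholder stands in for the output of a `learned_prior.pth` operator, which is loaded and run via the pOps codebase itself.

```python
# Hypothetical sketch: realizing an operator's output embedding as an image.
# Assumes the operator output lies in the Kandinsky 2.2 (CLIP ViT-bigG,
# 1280-dim) image-embedding space; only public diffusers APIs are used here.
import torch
from diffusers import KandinskyV22Pipeline

decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

# Placeholder for the embedding produced by a pOps operator (learned_prior.pth).
image_embeds = torch.randn(1, 1280, dtype=torch.float16, device="cuda")
negative_embeds = torch.zeros_like(image_embeds)

image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_embeds,
    num_inference_steps=50,
    height=512,
    width=512,
).images[0]
image.save("pops_output.png")
```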

## Trained Operators

- [Texturing Operator](https://huggingface.co/pOpsPaper/operators/blob/main/models/texturing/learned_prior.pth): Given an image embedding of an object and an image embedding of a texture exemplar, paint the object with the provided texture.
- [Scene Operator](https://huggingface.co/pOpsPaper/operators/blob/main/models/scene/learned_prior.pth): Given an image embedding of an object and an image embedding representing a scene layout, generate an image placing the object within a semantically similar scene.
- [Union Operator](https://huggingface.co/pOpsPaper/operators/blob/main/models/union/learned_prior.pth): Given two image embeddings representing scenes with one or multiple objects, combine the objects appearing in the scenes into a single embedding composed of both objects.
- [Instruct Operator](https://huggingface.co/pOpsPaper/operators/blob/main/models/instruct/learned_prior.pth): Given an image embedding of an object and a single-word adjective, apply the adjective to the image embedding, altering its characteristics accordingly.

## Inference

See the [pOps repo](https://github.com/pOpsPaper/pOps) for inference using the pretrained models.
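To use one of the checkpoints listed above with the pOps code, it can be fetched from this repository with the standard `huggingface_hub` client. A short sketch, using the texturing operator as an example (the `repo_id` and `filename` match the links above):

```python
# Download a pretrained operator checkpoint from the pOpsPaper/operators repo.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="pOpsPaper/operators",
    filename="models/texturing/learned_prior.pth",
)
print(f"Texturing operator checkpoint downloaded to: {ckpt_path}")
```

The returned local path can then be passed to the inference scripts in the pOps repo.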