StyleStudio: Text-Driven Style Transfer with Selective Control of Style Elements
Abstract
Text-driven style transfer aims to merge the style of a reference image with content described by a text prompt. Recent advancements in text-to-image models have improved the nuance of style transformations, yet significant challenges remain, particularly overfitting to reference styles, limited stylistic control, and misalignment with the textual content. In this paper, we propose three complementary strategies to address these issues. First, we introduce a cross-modal Adaptive Instance Normalization (AdaIN) mechanism that better integrates style and text features, enhancing alignment. Second, we develop a Style-based Classifier-Free Guidance (SCFG) approach that enables selective control over stylistic elements, reducing irrelevant influences. Finally, we incorporate a teacher model during the early generation stages to stabilize spatial layouts and mitigate artifacts. Our extensive evaluations demonstrate significant improvements in style transfer quality and alignment with textual prompts. Furthermore, our approach can be integrated into existing style transfer frameworks without fine-tuning.
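To make the three strategies more concrete, below is a minimal PyTorch sketch of how they could look. This is not the authors' implementation: the function names (`cross_modal_fuse`, `style_cfg`, `pick_prediction`), tensor layouts, and default guidance weights are illustrative assumptions based only on the abstract.

```python
# Illustrative sketch (assumed names/shapes), not the paper's code.
import torch


def adain(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization: re-normalize features `x` with the
    per-channel mean/std of features `y`. Both tensors are assumed (B, C, ...)."""
    dims = tuple(range(2, x.dim()))
    x_mean, x_std = x.mean(dims, keepdim=True), x.std(dims, keepdim=True) + eps
    y_mean, y_std = y.mean(dims, keepdim=True), y.std(dims, keepdim=True) + eps
    return y_std * (x - x_mean) / x_std + y_mean


def cross_modal_fuse(text_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Cross-modal AdaIN (illustrative): keep the structure of the text features
    while adopting the reference style's feature statistics."""
    return adain(text_feats, style_feats)


def style_cfg(eps_uncond, eps_text, eps_style, w_text: float = 7.5, w_style: float = 1.5):
    """Style-based classifier-free guidance (illustrative form): the style
    direction gets its own weight, so stylistic influence can be amplified or
    suppressed independently of the text guidance."""
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_style * (eps_style - eps_uncond))


def pick_prediction(step: int, eps_student: torch.Tensor,
                    eps_teacher: torch.Tensor, switch_step: int = 10) -> torch.Tensor:
    """Teacher guidance (illustrative): follow a non-stylized teacher model for
    the first few denoising steps to stabilize the spatial layout, then hand
    over to the style-conditioned prediction."""
    return eps_teacher if step < switch_step else eps_student
```

In a diffusion sampling loop, `style_cfg` would stand in for the usual two-term classifier-free guidance combination, and `pick_prediction` would gate which model's noise prediction is used at each step; the exact formulation in the paper may differ.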
Community
The following papers were recommended by the Semantic Scholar API:
- Z-STAR+: A Zero-shot Style Transfer Method via Adjusting Style Distribution (2024)
- FonTS: Text Rendering with Typography and Style Controls (2024)
- Style3D: Attention-guided Multi-view Style Transfer for 3D Object Generation (2024)
- Fast Prompt Alignment for Text-to-Image Generation (2024)
- Style-Friendly SNR Sampler for Style-Driven Generation (2024)
- Style-Pro: Style-Guided Prompt Learning for Generalizable Vision-Language Models (2024)
- SILMM: Self-Improving Large Multimodal Models for Compositional Text-to-Image Generation (2024)