CineScale: Free Lunch in High-Resolution Cinematic Visual Generation
Abstract
CineScale is a novel inference paradigm that enables high-resolution visual generation for both images and videos without extensive fine-tuning, addressing issues of repetitive patterns and high-frequency information accumulation.
Visual diffusion models have achieved remarkable progress, yet they are typically trained at limited resolutions due to the scarcity of high-resolution data and constrained computation resources, hampering their ability to generate high-fidelity images or videos at higher resolutions. Recent efforts have explored tuning-free strategies to unlock the untapped potential of pre-trained models for higher-resolution visual generation. However, these methods are still prone to producing low-quality visual content with repetitive patterns. The key obstacle lies in the inevitable increase in high-frequency information when the model generates visual content exceeding its training resolution, leading to undesirable repetitive patterns arising from accumulated errors. In this work, we propose CineScale, a novel inference paradigm that enables higher-resolution visual generation. To tackle the distinct issues introduced by the two types of video generation architectures, we propose dedicated variants tailored to each. Unlike existing baseline methods that are confined to high-resolution T2I and T2V generation, CineScale broadens the scope by enabling high-resolution I2V and V2V synthesis, built atop state-of-the-art open-source video generation frameworks. Extensive experiments validate the superiority of our paradigm in extending the capabilities of higher-resolution visual generation for both image and video models. Remarkably, our approach enables 8k image generation without any fine-tuning, and achieves 4k video generation with only minimal LoRA fine-tuning. Generated video samples are available at our website: https://eyeline-labs.github.io/CineScale/.
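The abstract attributes repetitive patterns to excess high-frequency content when sampling beyond the training resolution, which suggests fusing frequency bands of intermediate latents. The sketch below illustrates that general idea with a simple FFT-based band fusion: low frequencies taken from a "global" latent and high frequencies from a "local" one. The function name, `cutoff` parameter, and circular mask are illustrative assumptions, not CineScale's actual implementation.

```python
import numpy as np

def freq_fuse(global_latent: np.ndarray, local_latent: np.ndarray,
              cutoff: float = 0.25) -> np.ndarray:
    """Fuse two 2D latents in the frequency domain (illustrative sketch).

    Low frequencies (overall structure) come from `global_latent`;
    high frequencies (fine detail) come from `local_latent`.
    `cutoff` is an assumed fraction of the spatial extent, not a
    value taken from the paper.
    """
    h, w = global_latent.shape[-2:]
    # Shift DC to the center so a radial mask selects low frequencies.
    fg = np.fft.fftshift(np.fft.fft2(global_latent), axes=(-2, -1))
    fl = np.fft.fftshift(np.fft.fft2(local_latent), axes=(-2, -1))
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    low_pass = (dist <= cutoff * min(h, w)).astype(float)
    # Complementary masks sum to 1, so identical inputs pass through unchanged.
    fused = fg * low_pass + fl * (1.0 - low_pass)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused, axes=(-2, -1))))
```

Because the two masks are complementary, feeding the same latent as both inputs reconstructs it exactly; in practice one would apply such a fusion to denoising latents at each sampling step.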
Community
CineScale is an extension of FreeScale to higher-resolution video generation, unlocking 4k video generation!
Project Page: https://eyeline-labs.github.io/CineScale/
Code Repo: https://github.com/Eyeline-Labs/CineScale
The following papers were recommended by the Semantic Scholar API
- TurboVSR: Fantastic Video Upscalers and Where to Find Them (2025)
- DAM-VSR: Disentanglement of Appearance and Motion for Video Super-Resolution (2025)
- HiMat: DiT-based Ultra-High Resolution SVBRDF Generation (2025)
- APT: Improving Diffusion Models for High Resolution Image Generation with Adaptive Path Tracing (2025)
- FreeLong++: Training-Free Long Video Generation via Multi-band Spectral Fusion (2025)
- RAGSR: Regional Attention Guided Diffusion for Image Super-Resolution (2025)
- A Survey on Long-Video Storytelling Generation: Architectures, Consistency, and Cinematic Quality (2025)
Models citing this paper 1