---
license: other
language:
- en
base_model:
- THUDM/CogVideoX-5b
- THUDM/CogVideoX-5b-I2V
pipeline_tag: image-to-image
---

# CogVideoX1.1-5B-SAT

πŸ“„ Read in Chinese | 🌐 GitHub | πŸ“œ arXiv

πŸ“ Visit QingYing and API Platform to experience commercial video generation models.

CogVideoX is an open-source video generation model originating from [QingYing](https://chatglm.cn/video?fr=osm_cogvideo). CogVideoX1.1 is the upgraded version of the open-source CogVideoX model. The CogVideoX1.1-5B series supports **10-second** videos at higher resolutions, and the `CogVideoX1.1-5B-I2V` variant supports video generation at **any resolution**. This repository contains the SAT-weight version of the CogVideoX1.1-5B model, comprising the following modules:

## Transformer

Includes the weights for both the I2V and T2V models, laid out as follows:

```
β”œβ”€β”€ transformer_i2v
β”‚   β”œβ”€β”€ 1000
β”‚   β”‚   └── mp_rank_00_model_states.pt
β”‚   └── latest
└── transformer_t2v
    β”œβ”€β”€ 1000
    β”‚   └── mp_rank_00_model_states.pt
    └── latest
```

Please select the corresponding weights when performing inference.

## VAE

The VAE is identical to the one used by the CogVideoX-5B series and does not require updating. You can also download it directly from this repository. It consists of the following:

```
└── vae
    └── 3d-vae.pt
```

## Text Encoder

Identical to the text encoder of the diffusers version of CogVideoX-5B; no update is necessary. You can also download it directly from this repository. It consists of the following:

```
└── t5-v1_1-xxl
    β”œβ”€β”€ added_tokens.json
    β”œβ”€β”€ config.json
    β”œβ”€β”€ model-00001-of-00002.safetensors
    β”œβ”€β”€ model-00002-of-00002.safetensors
    β”œβ”€β”€ model.safetensors.index.json
    β”œβ”€β”€ special_tokens_map.json
    β”œβ”€β”€ spiece.model
    └── tokenizer_config.json

0 directories, 8 files
```

## Model License

This model is released under the [CogVideoX LICENSE](LICENSE).
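When selecting weights for inference, the checkpoint path follows the transformer directory tree shown earlier. As a minimal sketch (the helper name and `root` argument are hypothetical; the `1000` iteration directory and `mp_rank_00_model_states.pt` file name come from the layout above), the path for a given task can be built like this:

```python
from pathlib import Path


def sat_checkpoint_path(root: str, mode: str) -> Path:
    """Build the SAT transformer checkpoint path for a given task.

    mode: "i2v" for image-to-video, "t2v" for text-to-video.
    Mirrors the layout: <root>/transformer_<mode>/1000/mp_rank_00_model_states.pt
    """
    if mode not in ("i2v", "t2v"):
        raise ValueError("mode must be 'i2v' or 't2v'")
    return Path(root) / f"transformer_{mode}" / "1000" / "mp_rank_00_model_states.pt"


# Example: point the SAT inference config at the I2V weights.
ckpt = sat_checkpoint_path("CogVideoX1.1-5B-SAT", "i2v")
```

The resulting path is what you would reference in the SAT inference configuration for the chosen task.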
## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```