---
license: other
language:
- en
base_model:
- THUDM/CogVideoX-5b
- THUDM/CogVideoX-5b-I2V
pipeline_tag: image-to-image
---

# CogVideoX1.1-5B-SAT
Read in Chinese | Github | arxiv
Visit QingYing and the API Platform to experience commercial video generation models.
CogVideoX is an open-source video generation model originating from [Qingying](https://chatglm.cn/video?fr=osm_cogvideo). CogVideoX1.1 is the upgraded version of the open-source CogVideoX model. The CogVideoX1.1-5B series supports **10-second** videos and higher resolutions, and the `CogVideoX1.1-5B-I2V` variant supports video generation at **any resolution**. This repository contains the SAT-weight version of the CogVideoX1.1-5B model, specifically including the following modules:

## Transformer

Includes weights for both the I2V and T2V models. Specifically, it includes the following modules:

```
├── transformer_i2v
│   ├── 1000
│   │   └── mp_rank_00_model_states.pt
│   └── latest
└── transformer_t2v
    ├── 1000
    │   └── mp_rank_00_model_states.pt
    └── latest
```

Please select the corresponding weights when performing inference.

## VAE

The VAE part is consistent with the CogVideoX-5B series and does not require updating. You can also download it directly from here. Specifically, it includes the following module:

```
└── vae
    └── 3d-vae.pt
```

## Text Encoder

Consistent with the diffusers version of CogVideoX-5B; no updates are necessary. You can also download it directly from here. Specifically, it includes the following modules:

```
└── t5-v1_1-xxl
    ├── added_tokens.json
    ├── config.json
    ├── model-00001-of-00002.safetensors
    ├── model-00002-of-00002.safetensors
    ├── model.safetensors.index.json
    ├── special_tokens_map.json
    ├── spiece.model
    └── tokenizer_config.json

0 directories, 8 files
```

## Model License

This model is released under the [CogVideoX LICENSE](LICENSE).

## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```
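Since the T2V and I2V transformer weights live in parallel directories and must be selected per task, a small path helper can make that explicit. This is only an illustrative sketch: the function name and the hard-coded `1000` iteration directory are assumptions based on the tree shown in the Transformer section, not part of any official API.

```python
# Hypothetical helper: map a generation task to the matching SAT
# transformer checkpoint path from the directory tree above.
def sat_checkpoint_path(task: str, iteration: str = "1000") -> str:
    dirs = {"t2v": "transformer_t2v", "i2v": "transformer_i2v"}
    if task not in dirs:
        raise ValueError(f"unknown task: {task!r}; expected 't2v' or 'i2v'")
    return f"{dirs[task]}/{iteration}/mp_rank_00_model_states.pt"

print(sat_checkpoint_path("i2v"))
# transformer_i2v/1000/mp_rank_00_model_states.pt
```

Passing the wrong checkpoint (e.g. T2V weights for image-to-video inference) will not be caught by the loader, so resolving the path from the task name up front is a cheap safeguard.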