---
license: other
language:
- en
base_model:
- THUDM/CogVideoX-5b
- THUDM/CogVideoX-5b-I2V
pipeline_tag: image-to-video
---
# CogVideoX1.1-5B-SAT
Read in Chinese | Github | arXiv

Visit QingYing and the API Platform to experience commercial video generation models.
CogVideoX is an open-source video generation model originating from QingYing. CogVideoX1.1 is an upgraded version of the open-source CogVideoX model.
The CogVideoX1.1-5B series supports 10-second videos and higher resolutions. The CogVideoX1.1-5B-I2V variant supports video generation at any resolution.
This repository contains the SAT-weight version of the CogVideoX1.1-5B model, specifically including the following modules:
## Transformer
Includes the weights for both the I2V and T2V models, organized as follows:
```
├── transformer_i2v
│   ├── 1000
│   │   └── mp_rank_00_model_states.pt
│   └── latest
└── transformer_t2v
    ├── 1000
    │   └── mp_rank_00_model_states.pt
    └── latest
```
Please select the corresponding weights when performing inference.
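As a rough sketch, the desired variant can be fetched selectively with `huggingface_hub`; the repository ID and local directory names below are assumptions and may need adjusting for your setup:

```python
# Minimal sketch: download only the transformer variant you need (repo ID assumed).
from huggingface_hub import snapshot_download

# Pick "transformer_i2v" for image-to-video or "transformer_t2v" for text-to-video.
variant = "transformer_i2v"

local_dir = snapshot_download(
    repo_id="THUDM/CogVideoX1.1-5B-SAT",  # assumed repository ID
    allow_patterns=[f"{variant}/*"],       # skip the variant you do not need
    local_dir="CogVideoX1.1-5B-SAT",       # assumed local target directory
)
print(f"{variant} weights are under {local_dir}/{variant}")
```

In SAT/DeepSpeed-style checkpoints, the `latest` file is typically a plain-text pointer to the iteration subdirectory (here `1000`) that holds `mp_rank_00_model_states.pt`.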
## VAE
The VAE is consistent with the CogVideoX-5B series and does not require updating. You can also download it directly from here. It contains the following file:
```
└── vae
    └── 3d-vae.pt
```
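If you want to sanity-check the download, here is a minimal sketch for inspecting the checkpoint with PyTorch; the relative path is an assumption:

```python
# Minimal sketch: peek inside the 3D-VAE checkpoint (relative path assumed).
import torch

ckpt = torch.load("vae/3d-vae.pt", map_location="cpu")

# The file may hold a bare state dict or a wrapper such as {"state_dict": ...};
# unwrap if needed and list a few parameter names.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state_dict), "entries, e.g.:", list(state_dict)[:5])
```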
## Text Encoder
The text encoder is consistent with the diffusers version of CogVideoX-5B and requires no update. You can also download it directly from here. It contains the following files:
```
└── t5-v1_1-xxl
    ├── added_tokens.json
    ├── config.json
    ├── model-00001-of-00002.safetensors
    ├── model-00002-of-00002.safetensors
    ├── model.safetensors.index.json
    ├── special_tokens_map.json
    ├── spiece.model
    └── tokenizer_config.json

0 directories, 8 files
```
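Because this layout matches the standard `transformers` T5 format, the encoder can be loaded straight from the local folder. A minimal sketch, where the local path and the 226-token padding length are assumptions taken from common CogVideoX usage:

```python
# Minimal sketch: load the T5-XXL text encoder from the local folder (path assumed).
import torch
from transformers import T5EncoderModel, T5Tokenizer  # T5Tokenizer needs sentencepiece

tokenizer = T5Tokenizer.from_pretrained("t5-v1_1-xxl")
text_encoder = T5EncoderModel.from_pretrained("t5-v1_1-xxl", torch_dtype=torch.bfloat16)

# Encode a prompt; 226 is assumed here to match the CogVideoX text context length,
# so use whatever sequence length your inference config expects.
inputs = tokenizer(
    "a panda playing a guitar in a bamboo forest",
    padding="max_length", max_length=226, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    embeddings = text_encoder(inputs.input_ids).last_hidden_state
print(embeddings.shape)  # torch.Size([1, 226, 4096]) for T5-XXL
```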
## Model License
This model is released under the CogVideoX LICENSE.
## Citation
```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```