---
license: other
language:
  - en
base_model:
  - THUDM/CogVideoX-5b
  - THUDM/CogVideoX-5b-I2V
pipeline_tag: image-to-video
---

# CogVideoX1.1-5B-SAT

📄 Read in Chinese | 🌐 GitHub | 📜 arXiv

πŸ“ Visit QingYing and API Platform to experience commercial video generation models.

CogVideoX is an open-source video generation model originating from QingYing. CogVideoX1.1 is the upgraded version of the open-source CogVideoX model.

The CogVideoX1.1-5B series model supports 10-second videos and higher resolutions. The CogVideoX1.1-5B-I2V variant supports any resolution for video generation.

This repository contains the SAT-weight version of the CogVideoX1.1-5B model, specifically including the following modules:

## Transformer

This directory contains weights for both the I2V and T2V models, organized as follows:

├── transformer_i2v  
│   ├── 1000  
│   │   └── mp_rank_00_model_states.pt  
│   └── latest  
└── transformer_t2v  
    ├── 1000  
    │   └── mp_rank_00_model_states.pt  
    └── latest  

Please select the corresponding weights when performing inference.
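For illustration, the checkpoint path for either task could be assembled like this. This is a minimal sketch assuming the directory layout shown above; `sat_checkpoint_path` and the `weights_root` argument are hypothetical names, not part of the official SAT tooling.

```python
from pathlib import Path

def sat_checkpoint_path(weights_root: str, task: str) -> Path:
    """Build the path to the SAT checkpoint for 'i2v' or 't2v' inference.

    Follows the layout above: each transformer folder holds a numbered
    step directory (1000) containing a DeepSpeed-style state file.
    """
    if task not in ("i2v", "t2v"):
        raise ValueError("task must be 'i2v' or 't2v'")
    return (Path(weights_root) / f"transformer_{task}"
            / "1000" / "mp_rank_00_model_states.pt")

# Example: point SAT inference at the image-to-video weights.
print(sat_checkpoint_path("CogVideoX1.1-5B-SAT", "i2v"))
```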

## VAE

The VAE part is consistent with the CogVideoX-5B series and does not require updating. You can also download it directly from here. Specifically, it includes the following files:

└── vae  
    └── 3d-vae.pt  

## Text Encoder

Consistent with the diffusers version of CogVideoX-5B, no updates are necessary. You can also download it directly from here. Specifically, it includes the following files:

└── t5-v1_1-xxl  
    ├── added_tokens.json  
    ├── config.json  
    ├── model-00001-of-00002.safetensors  
    ├── model-00002-of-00002.safetensors  
    ├── model.safetensors.index.json  
    ├── special_tokens_map.json  
    ├── spiece.model  
    └── tokenizer_config.json  
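After downloading, a quick sanity check that all eight text-encoder files are present can be sketched as follows. This is a hypothetical helper, not part of the repository; the file names come from the listing above.

```python
from pathlib import Path

# The eight files expected in the t5-v1_1-xxl directory, per the listing above.
EXPECTED_T5_FILES = {
    "added_tokens.json",
    "config.json",
    "model-00001-of-00002.safetensors",
    "model-00002-of-00002.safetensors",
    "model.safetensors.index.json",
    "special_tokens_map.json",
    "spiece.model",
    "tokenizer_config.json",
}

def missing_t5_files(encoder_dir: str) -> set:
    """Return the expected files that are NOT present in encoder_dir."""
    root = Path(encoder_dir)
    present = {p.name for p in root.iterdir()} if root.is_dir() else set()
    return EXPECTED_T5_FILES - present
```

An empty result means the download is complete; a nonexistent directory reports all eight files as missing.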

## Model License

This model is released under the CogVideoX LICENSE.

## Citation

@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}