patrickvonplaten committed
Commit 13c351d · Parent(s): f61a46b
Update README.md
README.md CHANGED

@@ -76,7 +76,7 @@ Here are some results:
 ## Long Video Generation
 
 You can optimize for memory usage by enabling attention and VAE slicing and using Torch 2.0.
-This should allow you to generate videos up to
+This should allow you to generate videos up to 25 seconds on less than 16GB of GPU VRAM.
 
 ```bash
 $ pip install git+https://github.com/huggingface/diffusers transformers accelerate

@@ -96,8 +96,8 @@ pipe.enable_model_cpu_offload()
 pipe.enable_vae_slicing()
 
 # generate
-prompt =
-video_frames = pipe(prompt, num_inference_steps=25, num_frames=
+prompt = "Spiderman is surfing. Darth Vader is also surfing and following Spiderman"
+video_frames = pipe(prompt, num_inference_steps=25, num_frames=200).frames
 
 # convert to video
 video_path = export_to_video(video_frames)