bubbliiiing committed
Commit 00e70f4 · 1 parent: f2649d2

Update Readme

Files changed (2):
  1. README.md +3 -2
  2. README_en.md +3 -3
README.md CHANGED

@@ -86,8 +86,7 @@ EasyAnimateV5.1:
   <video src="https://github.com/user-attachments/assets/7f62795a-2b3b-4c14-aeb1-1230cb818067" width="100%" controls autoplay loop></video>
   </td>
   <td>
-  <video src="https://github.com/user-attachments/assets/b581df84-ade1-4605-a7a8-fd735ce3e222
-  " width="100%" controls autoplay loop></video>
+  <video src="https://github.com/user-attachments/assets/b581df84-ade1-4605-a7a8-fd735ce3e222" width="100%" controls autoplay loop></video>
   </td>
   </tr>
   </table>

@@ -360,6 +359,8 @@ The video size for EasyAnimateV5.1-7B can be generated with different GPU memory, including:
   | 40GB | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
   | 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

+  Because the qwen2-vl-7b weights are float16, it cannot run within 16GB of GPU memory. If your GPU has 16GB of memory, please download the quantized qwen2-vl-7b from [Huggingface](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8) or [Modelscope](https://modelscope.cn/models/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8) to replace the original text encoder, and install the corresponding dependency libraries (auto-gptq, optimum).
+
   ✅ indicates it can run under "model_cpu_offload", 🧡 indicates it can run under "model_cpu_offload_and_qfloat8", ⭕️ indicates it can run under "sequential_cpu_offload", ❌ means it cannot run. Please note that running with sequential_cpu_offload is slower.

   Some GPU models, such as the 2080ti and V100, do not support torch.bfloat16; on these cards, change weight_dtype in app.py and the predict files to torch.float16 in order to run.
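The weight_dtype fallback described above can be sketched as a small helper. This is an illustrative sketch, not code from the EasyAnimate repository; the function name and the GPU list are assumptions for demonstration:

```python
# Hypothetical helper: pick the weight dtype from the GPU model name.
# Cards without bfloat16 support (e.g. 2080 Ti, V100, per the README)
# fall back to float16.
NO_BF16_GPUS = ("2080 ti", "2080ti", "v100")

def pick_weight_dtype(gpu_name: str) -> str:
    """Return the torch dtype name to assign to weight_dtype."""
    name = gpu_name.lower()
    if any(card in name for card in NO_BF16_GPUS):
        return "torch.float16"
    return "torch.bfloat16"
```

In practice one could also query support at runtime (e.g. PyTorch exposes `torch.cuda.is_bf16_supported()`) instead of matching device names.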
README_en.md CHANGED

@@ -56,8 +56,7 @@ EasyAnimateV5.1:
   <video src="https://github.com/user-attachments/assets/7f62795a-2b3b-4c14-aeb1-1230cb818067" width="100%" controls autoplay loop></video>
   </td>
   <td>
-  <video src="https://github.com/user-attachments/assets/b581df84-ade1-4605-a7a8-fd735ce3e222
-  " width="100%" controls autoplay loop></video>
+  <video src="https://github.com/user-attachments/assets/b581df84-ade1-4605-a7a8-fd735ce3e222" width="100%" controls autoplay loop></video>
   </td>
   </tr>
   </table>

@@ -325,7 +324,7 @@ The video size for EasyAnimateV5.1-12B can be generated by different GPU Memory,
   | 40GB | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
   | 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

-  The video size for EasyAnimateV5-7B can be generated by different GPU Memory, including:
+  The video size for EasyAnimateV5.1-7B can be generated by different GPU Memory, including:
   | GPU memory | 384x672x72 | 384x672x49 | 576x1008x25 | 576x1008x49 | 768x1344x25 | 768x1344x49 |
   |------------|------------|------------|------------|------------|------------|------------|
   | 16GB | 🧡 | 🧡 | ❌ | ❌ | ❌ | ❌ |

@@ -333,6 +332,7 @@ The video size for EasyAnimateV5-7B can be generated by different GPU Memory, in
   | 40GB | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
   | 80GB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

+  Due to the float16 weights of qwen2-vl-7b, it cannot run on a 16GB GPU. If your GPU memory is 16GB, please visit [Huggingface](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8) or [Modelscope](https://modelscope.cn/models/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8) to download the quantized version of qwen2-vl-7b to replace the original text encoder, and install the corresponding dependency libraries (auto-gptq, optimum).

   ✅ indicates it can run under "model_cpu_offload", 🧡 represents it can run under "model_cpu_offload_and_qfloat8", ⭕️ indicates it can run under "sequential_cpu_offload", ❌ means it can't run. Please note that running with sequential_cpu_offload will be slower.
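The 16GB limit for the float16 qwen2-vl-7b text encoder follows from rough weight arithmetic. A minimal sketch, assuming roughly 7B parameters and ignoring activations and the diffusion model itself (all figures are illustrative estimates, not measurements):

```python
# Back-of-envelope estimate: weight footprint ≈ parameter count × bytes/param.
PARAMS = 7e9  # ~7B parameters in qwen2-vl-7b (rough assumption)

def weights_gib(bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for the text encoder alone."""
    return PARAMS * bytes_per_param / 1024**3

fp16_gib = weights_gib(2.0)  # float16: ~13 GiB of weights alone
int8_gib = weights_gib(1.0)  # GPTQ int8: roughly half of that
```

With float16, the encoder's weights alone consume most of a 16GB card before anything else is loaded, which is why the GPTQ-Int8 variant is suggested there.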