Apply for community grant: Personal project (gpu and storage)
Hello, if this project could use community grant resources, that would be very convenient for users!
I hope there will be resources for:
GPU: To run the large Whisper models, you need a GPU with about 10GB of VRAM.
You can see the approximate VRAM usage for the models here: https://github.com/SYSTRAN/faster-whisper/blob/master/README.md
Persistent Storage: To use pre-downloaded models without requiring users to download the model each time they use the WebUI. (If I can push only the faster-whisper large-v3 model (about 5.75GB) to the Space, that would be enough; see the sketch below.)
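For reference, here is a minimal sketch of how the Space could load the pre-downloaded model from persistent storage instead of downloading it on each startup. The `/data/models` path is an assumption about where the storage would be mounted; the audio file name is only illustrative.

```python
# Sketch: load a pre-downloaded faster-whisper large-v3 model from persistent storage.
from faster_whisper import WhisperModel

model = WhisperModel(
    "large-v3",
    device="cuda",
    compute_type="float16",
    download_root="/data/models",  # assumed mount point; reuses pre-downloaded weights
)

segments, info = model.transcribe("example_audio.wav")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")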
Hi @hysts, thanks for the quick response and for assigning ZeroGPU.
But it seems that faster-whisper cannot be used on ZeroGPU.
I got the following error:
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory Please make sure libcudnn_ops_infer.so.8 is in your library path!
According to here, faster-whisper is compatible with CUDA 11.2 and cuDNN 8.1.
Based on my research, ZeroGPU uses CUDA 11.7 and cuDNN 8.5, and I guess this is not compatible with faster-whisper.
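As a quick check, something like the following could be run inside a ZeroGPU-decorated function to confirm which CUDA and cuDNN versions PyTorch actually sees in the Space (just a sketch; the reported versions may differ from my guess above):

```python
# Sketch: print the CUDA / cuDNN versions visible to PyTorch inside a ZeroGPU function.
import spaces
import torch

@spaces.GPU
def report_versions():
    print("CUDA (PyTorch build):", torch.version.cuda)
    print("cuDNN:", torch.backends.cudnn.version())
    print("GPU:", torch.cuda.get_device_name(0))

report_versions()
```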
Is it possible to change the CUDA and cuDNN versions in ZeroGPU?
If that's not possible, I guess I should use the OpenAI Whisper implementation rather than faster-whisper in the Space.
The Space works fine with just the OpenAI Whisper implementation; a rough sketch of that fallback is below.
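This is only a sketch of the fallback, assuming the standard `openai-whisper` package; the function and audio file names are illustrative.

```python
# Sketch of the fallback: openai-whisper instead of faster-whisper on ZeroGPU.
import spaces
import whisper

# Load on CPU at startup and move to the GPU inside the decorated function,
# since ZeroGPU only attaches a GPU while the decorated function runs.
model = whisper.load_model("large-v3", device="cpu")

@spaces.GPU
def transcribe(audio_path: str) -> str:
    model.to("cuda")
    result = model.transcribe(audio_path)
    return result["text"]

print(transcribe("example_audio.wav"))
```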