Inference speed is a bit slow!

#2
by walkingwithGod - opened

Hello brothers, thank you for your excellent work. The project's results are fantastic; the only issue is speed. I've deployed this demo locally, and when testing with 1080p video, I found it quite slow: it took about 7 minutes to process a 17-second video. Is there any way to speed up the processing? Looking forward to your reply!

INNOVA AI org
•
edited Oct 13

Hi @walkingwithGod
I was going to share this demo with you, but I'm glad you've already found it ^^
The issue stems from the background removal model itself: the bigger the image resolution, the slower it gets. I hear the original creator is working on another, bigger model, which should, in theory, use a bit more GPU memory but process images faster.
I also want to emphasize that, unlike text generation models, which use KV caching to generate tokens faster based on previous output, we process each frame of the video individually, so the total time is roughly the processing time of one image multiplied by the number of frames.
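To make that arithmetic concrete, here is a minimal sketch; the frame rate and per-image latency are illustrative assumptions, not measurements from this Space:

```python
# Back-of-the-envelope estimate: total time ~ per-frame time x frame count.
fps = 30                # assumed frame rate of the clip
duration_s = 17         # the 17-second video from this thread
sec_per_frame = 0.8     # hypothetical 1080p per-image latency

n_frames = fps * duration_s
print(f"{n_frames} frames -> ~{n_frames * sec_per_frame / 60:.1f} minutes")
# 510 frames -> ~6.8 minutes, the same ballpark as the ~7 minutes reported above
```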
I know the model is really slow, but that's one of the trade-offs of using an 800 MB model.

Hope this explains things.

Thank you @not-lain , my friend^^. May God bless you and your excellent work^^.

INNOVA AI org

@walkingwithGod I added a fast mode that uses BiRefNet Lite, which may cost some quality but should significantly speed up inference. Could you please test it with the same 17-second video and let me know how it performs? Thank you!
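For anyone wanting to try the same swap locally, here is a minimal sketch. It assumes the model is loaded through transformers' `AutoModelForImageSegmentation` with `trust_remote_code`, the pattern documented on the BiRefNet model cards; it is not a confirmed excerpt of this repo's code:

```python
import torch
from transformers import AutoModelForImageSegmentation

MODEL_IDS = {
    "quality": "ZhengPeng7/BiRefNet",     # full model, slower
    "fast": "ZhengPeng7/BiRefNet_lite",   # lite model, some quality loss
}

def load_segmenter(mode: str = "fast"):
    # Download the chosen checkpoint and move it to the GPU if one is available
    model = AutoModelForImageSegmentation.from_pretrained(
        MODEL_IDS[mode], trust_remote_code=True
    )
    model.to("cuda" if torch.cuda.is_available() else "cpu").eval()
    return model
```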


I have tested the new model. It took about 5 minutes to process the same 17-second video. It does feel a bit faster, but it's still relatively slow. I hope for further improvements in the future. Thank you for your excellent work! ^^

INNOVA AI org

@walkingwithGod I've now added parallel processing. Could you please test it once more and let me know how it performs? Also note that you can adjust the number of images processed in parallel through the max workers setting, so experiment to see which value works best on your device. Thank you!
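The parallel idea looks roughly like the sketch below; `remove_background()` is a hypothetical per-image function standing in for whatever the Space actually calls:

```python
from concurrent.futures import ThreadPoolExecutor

def process_frames(frames, remove_background, max_workers=4):
    # executor.map preserves frame order, which matters when re-encoding video
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(remove_background, frames))
```

How much this helps depends on how GPU-bound each frame is; once the GPU is saturated, extra workers add little, which is why it's worth experimenting with the setting.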

I apologize, my friend, I've been quite busy these past few days and haven't had time to test. Today, I tested with the latest model, and the same 17-second video took 183 seconds, a further improvement in speed! I'm not sure whether, as in some other AI projects, it would be faster to first downscale the frames, then remove the background, and finally restore the original size; just something for you to consider. Thank you for your excellent work!
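That downscale-then-restore idea could look like the following minimal sketch; `remove_background()` is again hypothetical and is assumed to return a grayscale alpha mask:

```python
from PIL import Image

def fast_remove(frame: Image.Image, remove_background, scale: float = 0.5):
    # Segment at reduced resolution, then resize only the mask back up,
    # so the full-resolution RGB pixels are never degraded.
    small = frame.resize(
        (int(frame.width * scale), int(frame.height * scale)), Image.BILINEAR
    )
    mask = remove_background(small)                 # grayscale ("L") mask
    mask = mask.resize(frame.size, Image.BILINEAR)  # restore original size
    out = frame.convert("RGBA")
    out.putalpha(mask)                              # composite at full 1080p
    return out
```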

Can you tell me how I can use this without the 200-frame limit on my local computer?
