Local Windows Implementation

#2
by bingbangboom - opened

Hi, will you release a local implementation guide for Windows? I managed to get this running locally, but the results seem to be all over the place and heavily compressed.

Is it possible to run this model locally?

Finegrain org

Yes, this space is fully Open Source and can be run locally.

It is highly unlikely that we will release documentation for Windows, though, as nobody on our team runs it. It can be run on a Linux machine with a GPU that has enough VRAM (a 3090 or a 4090 should work, for instance).

My graphics card is a GeForce GTX 960M.
Is it possible for me to run this model?

Finegrain org

That graphics card has only 4 GB of VRAM, so I do not think it can run the model, at least not without some changes.
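As a rough sanity check before trying to run the space, you can compare your card's VRAM against the recommended cards above. A minimal sketch, assuming a ~24 GB threshold (what a 3090/4090 carries; the exact requirement is not documented, and the function names here are ours):

```python
def vram_gb(total_bytes):
    """Convert a raw byte count to GiB.

    On an NVIDIA machine with PyTorch installed, the byte count can be
    obtained via torch.cuda.get_device_properties(0).total_memory.
    """
    return total_bytes / 1024**3


def enough_vram(total_bytes, required_gb=24):
    """Rough go/no-go check. The 24 GB default is our assumption,
    based on the 3090/4090 recommendation, not an official figure."""
    return vram_gb(total_bytes) >= required_gb


# A GTX 960M reports roughly 4 GiB of VRAM:
print(enough_vram(4 * 1024**3))   # False
# An RTX 3090 has 24 GiB:
print(enough_vram(24 * 1024**3))  # True
```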

If possible, could you optimize it to run on Google Colab and share the script?

Hello, I tried running it with Docker on my Mac M2, but the container just stops right here. I deleted the Docker image and ran the script again, but I still have the same issue: the models are not downloading. Has anyone hit a similar issue? Thank you.
(screenshot attached: image.png)
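One common cause of a Docker pull stalling on macOS is the Docker Desktop VM running out of disk while downloading multi-gigabyte model weights; on Apple Silicon, an image built only for x86 may also need `--platform linux/amd64`. A quick, hedged sanity check for free disk space (the 30 GB margin below is a hypothetical safety value, not a documented requirement):

```python
import shutil


def free_disk_gb(path="/"):
    """Free space at `path` in GiB. On macOS, point this at the volume
    holding Docker Desktop's virtual disk image."""
    return shutil.disk_usage(path).free / 1024**3


# Model weights for spaces like this can run to tens of GB.
if free_disk_gb() < 30:
    print("Low disk space: increase Docker Desktop's virtual disk limit "
          "before re-pulling the models.")
else:
    print("Disk space looks OK.")
```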
