Spaces: Running on Zero
Can I use this locally with other models like Qwen2 7B, etc.? (Also, does this support loading the model in 8-bit?)
#1 by Clausss - opened
Same as the title.
Yes, it's definitely possible to run this locally too. The best approach depends on your setup (GPU vs. CPU inference, etc.). There is one example from @mrm8488 of using Ollama, shared in this post (https://huggingface.co/posts/mrm8488/799935689571130) and on GitHub: https://github.com/mrm8488/magpie-ollama-datagen. If you give me a bit more context about your setup, I can try to give some more pointers.
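For reference, a minimal sketch of the Ollama route mentioned above, assuming Ollama is already installed from https://ollama.com. The model tag `qwen2:7b` is an example from the Ollama library; substitute whatever tag you want to run. Note that Ollama serves pre-quantized GGUF builds (4-bit by default), so it already covers the reduced-memory use case that 8-bit loading targets:

```shell
# Download a pre-quantized GGUF build of Qwen2 7B (example tag; see the Ollama library)
ollama pull qwen2:7b

# Quick smoke test from the CLI
ollama run qwen2:7b "Say hello in one sentence."

# List locally available models to confirm the pull worked
ollama list
```

After pulling, the model is also reachable programmatically through Ollama's local HTTP API on port 11434, which is what the magpie-ollama-datagen repo linked above builds on.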
Thanks for explaining!
Clausss changed discussion status to closed