Hi team!
What Space configuration on Hugging Face do you use to host the Qwen1.5-72B-Chat-GGUF model?
Which GPU do you recommend for running it locally?
Thanks