Has anyone tried running this model on Ollama?
Is it possible to run this model on Ollama?
ollama pull deepseek-ai/DeepSeek-V3
pulling manifest
Error: pull model manifest: file does not exist
Hey, I was able to get DeepSeek running under Ollama, both the 33B and the 6.7B versions, via g1ibby's upload (not an official source).
The key is to use the correct naming format (e.g., deepseek:33b).
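For reference, a minimal session might look like the lines below. The tag is taken from the comment above and is a placeholder; substitute whatever name the (unofficial) uploader actually published:

ollama pull deepseek:33b
ollama run deepseek:33b "Write a short haiku about quantization."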
That's DeepSeek 2.5, not V3?
I'm getting an "unsupported platform" error when creating the model with Ollama...
Still no GGUF support so far?
Gemini 2 already has it in scope ;-)
It appears you're interested in running DeepSeek-V3 on Ollama. Here's a breakdown of the current situation and how support is progressing:
Current Status:
Active Development: There's active work happening to bring DeepSeek-V3 to Ollama. This is being tracked in the Ollama GitHub repository, specifically in "DeepSeek v3" (issue #8249 on github.com) and related discussions.
GGML/GGUF Conversion: A key step is making the model available in the GGUF format (the successor to GGML), which is what Ollama and llama.cpp consume. Work is being done to provide this conversion; a sketch of the usual flow follows this list.
llama.cpp Support: The underlying llama.cpp library that Ollama uses has recently merged support for DeepSeek-V3. This is a crucial step for Ollama integration.
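To make the conversion step concrete, here is a minimal sketch of the usual llama.cpp-to-Ollama flow. The script and binary names match recent llama.cpp checkouts, but the paths, output names, and quantization type are assumptions, and DeepSeek-V3 itself will only work once Ollama ships the newly merged llama.cpp support:

# convert the Hugging Face checkpoint to GGUF (script ships with llama.cpp)
python convert_hf_to_gguf.py /path/to/DeepSeek-V3 --outfile deepseek-v3-f16.gguf --outtype f16

# optionally quantize to cut memory requirements
./llama-quantize deepseek-v3-f16.gguf deepseek-v3-q4_k_m.gguf Q4_K_M

# import the GGUF into Ollama via a Modelfile and run it
echo "FROM ./deepseek-v3-q4_k_m.gguf" > Modelfile
ollama create deepseek-v3 -f Modelfile
ollama run deepseek-v3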
How to Follow Progress:
Ollama GitHub: Keep an eye on the Ollama GitHub repository, particularly the issues mentioned above (#8249, #8268), for updates from the developers.
llama.cpp Repository: You can also check the llama.cpp repository for any further developments related to DeepSeek-V3 support.
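If you use GitHub's gh CLI, you can check those threads from the terminal without opening a browser:

gh issue view 8249 --repo ollama/ollama --comments
gh issue view 8268 --repo ollama/ollama --comments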
General Requirements for Running Large Models:
While waiting for official support, it's worth noting the general hardware requirements for running large language models like DeepSeek-V3:
Significant RAM: DeepSeek-V3 is a 671B-parameter mixture-of-experts model, so even aggressively quantized builds need hundreds of gigabytes of memory; a rough estimate follows this list.
Powerful Hardware: A fast CPU with high memory bandwidth helps, and a dedicated GPU (or several) can significantly improve inference speed, although fully offloading a model this size requires an unusually large VRAM pool.
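As a back-of-the-envelope check (weights only, ignoring KV cache and runtime overhead), memory is roughly parameter count times bits per weight:

echo "671 * 4.5 / 8" | bc -l   # ~4.5 bits/weight (a Q4_K_M-class quant): about 377 GB
echo "671 * 8 / 8" | bc -l     # 8 bits/weight (Q8_0): about 671 GB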
In summary: Support for DeepSeek-V3 in Ollama is in progress. Keep an eye on the relevant GitHub repositories for updates. In the meantime, ensure you have the necessary hardware resources to run such a large model.
So we have to wait...