Hi, I have a question for you all: is there a way to save or dump the current state of a ggml model running in llama.cpp? It has been thinking for 8+ hours and is doing pretty cool stuff.
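To frame what I mean by "saving state", here is a minimal sketch, assuming a llama.cpp build whose `llama.h` exposes the state/session API (`llama_state_save_file` / `llama_state_load_file`; older builds name these `llama_save_session_file` / `llama_load_session_file`) and assuming the token history of the run is still available in the program:

```cpp
// Sketch only: relies on the llama.cpp state/session API as exposed in llama.h.
// Older versions of the library name these functions llama_save_session_file /
// llama_load_session_file instead.
#include "llama.h"

#include <vector>

// Save the context state (KV cache etc.) plus the token history to a file,
// so the run can be resumed later without re-evaluating everything.
static bool save_run(llama_context * ctx,
                     const std::vector<llama_token> & history,
                     const char * path) {
    return llama_state_save_file(ctx, path, history.data(), history.size());
}

// Restore the saved state into a freshly created context for the SAME model
// file, recovering the token history that was saved alongside it.
static bool load_run(llama_context * ctx,
                     std::vector<llama_token> & history,
                     const char * path) {
    history.resize(llama_n_ctx(ctx));
    size_t n_loaded = 0;
    if (!llama_state_load_file(ctx, path, history.data(), history.size(), &n_loaded)) {
        return false;
    }
    history.resize(n_loaded);
    return true;
}
```

As far as I understand, the `main` / `llama-cli` example also has `--prompt-cache FNAME` (and `--prompt-cache-all`) flags that wrap this kind of session saving, but those seem to need to be passed before the run starts, which is why I'm asking whether an already-running session can still be dumped.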