ariG23498 (HF staff) committed
Commit: b026891
Parent: 18ff4e4

update readme for card generation


You can now run GGUF models from the Hugging Face Hub directly using ollama.

This PR updates the card generation code so that the generated README outlines the ollama usage.
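For context, ollama can pull a GGUF repo straight off the Hub by its `hf.co/...` path. A minimal sketch of the command the generated card will contain, with a hypothetical repo name standing in for the `{new_repo_url}` placeholder that app.py substitutes:

```sh
# Hypothetical example: "username/My-Model-GGUF" stands in for the
# {new_repo_url} value filled into the card template by app.py.
ollama run hf.co/username/My-Model-GGUF
```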

Files changed (1): app.py (+8, -0)
app.py CHANGED
@@ -174,6 +174,14 @@ def process_model(model_id, q_method, use_imatrix, imatrix_q_method, private_rep
 # {new_repo_id}
 This model was converted to GGUF format from [`{model_id}`](https://huggingface.co/{model_id}) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/{model_id}) for more details on the model.
+
+## Use with ollama
+Install ollama from the [official website](https://ollama.com/).
+
+Run the model on the CLI.
+```sh
+ollama run hf.co/{new_repo_url}
+```
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
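
The llama.cpp section that follows in the card template is untouched by this PR; for reference, the brew-based install it describes is the usual one-liner (a sketch of the unchanged alternative, not part of this diff):

```sh
# Unchanged alternative covered by the template's next section:
# llama.cpp installed via Homebrew (works on macOS and Linux).
brew install llama.cpp
```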