itlwas committed
Commit c4a850c
Parent: 9fb4a03

Upload README.md with huggingface_hub

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -17,7 +17,7 @@ license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
 library_name: transformers
 ---
 
-# AIronMind/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF
+# itlwas/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF
 This model was converted to GGUF format from [`huihui-ai/Falcon3-1B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Falcon3-1B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/huihui-ai/Falcon3-1B-Instruct-abliterated) for more details on the model.
 
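The GGUF-my-repo space automates this conversion. For reference, a roughly equivalent local workflow using llama.cpp's own tooling might look like the sketch below; the intermediate f16 filename is illustrative, and the script/binary names follow recent llama.cpp checkouts (older ones use `convert-hf-to-gguf.py` and `quantize`).

```bash
# Convert the original Hugging Face model to a 16-bit GGUF,
# then quantize it to Q4_K_M (run from a llama.cpp checkout).
python convert_hf_to_gguf.py path/to/Falcon3-1B-Instruct-abliterated \
  --outfile falcon3-1b-instruct-abliterated-f16.gguf --outtype f16
./llama-quantize falcon3-1b-instruct-abliterated-f16.gguf \
  falcon3-1b-instruct-abliterated-q4_k_m.gguf Q4_K_M
```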
@@ -32,12 +32,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo AIronMind/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo itlwas/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo AIronMind/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -c 2048
+llama-server --hf-repo itlwas/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
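Once the server is running (it binds to http://localhost:8080 by default), requests can go to its OpenAI-compatible chat endpoint. A minimal sketch, assuming the default host and port:

```bash
# Send a chat request to the running llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Briefly explain GGUF quantization."}
        ],
        "max_tokens": 128
      }'
```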
@@ -54,9 +54,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo AIronMind/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo itlwas/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo AIronMind/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -c 2048
+./llama-server --hf-repo itlwas/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF --hf-file falcon3-1b-instruct-abliterated-q4_k_m.gguf -c 2048
 ```
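The `--hf-repo`/`--hf-file` flags download the weights on first use. If you would rather keep an explicit local copy, one option is to fetch the file yourself and point `llama-cli` at it with `-m`; the sketch below uses `huggingface-cli` from the `huggingface_hub` package, which is an assumption on my part and not part of this README's steps.

```bash
# Download the quantized file once, then run it directly from disk.
huggingface-cli download itlwas/Falcon3-1B-Instruct-abliterated-Q4_K_M-GGUF \
  falcon3-1b-instruct-abliterated-q4_k_m.gguf --local-dir .
./llama-cli -m falcon3-1b-instruct-abliterated-q4_k_m.gguf \
  -p "The meaning to life and the universe is"
```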
 