fuzzy-mittenz committed
Commit 805b94d · verified · 1 Parent(s): 04c5498

Update README.md

Files changed (1)
  1. README.md +3 -39
README.md CHANGED
@@ -23,46 +23,10 @@ tags:
  - gguf-my-repo
  ---
 
- # fuzzy-mittenz/Dolphin3.0-Qwen2.5-1.5B-Q8_0-GGUF
- This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux)
-
- ```bash
- brew install llama.cpp
-
- ```
- Invoke the llama.cpp server or the CLI.
 
- ### CLI:
- ```bash
- llama-cli --hf-repo fuzzy-mittenz/Dolphin3.0-Qwen2.5-1.5B-Q8_0-GGUF --hf-file dolphin3.0-qwen2.5-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
- ```
 
- ### Server:
- ```bash
- llama-server --hf-repo fuzzy-mittenz/Dolphin3.0-Qwen2.5-1.5B-Q8_0-GGUF --hf-file dolphin3.0-qwen2.5-1.5b-q8_0.gguf -c 2048
- ```
 
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
 
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo fuzzy-mittenz/Dolphin3.0-Qwen2.5-1.5B-Q8_0-GGUF --hf-file dolphin3.0-qwen2.5-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo fuzzy-mittenz/Dolphin3.0-Qwen2.5-1.5B-Q8_0-GGUF --hf-file dolphin3.0-qwen2.5-1.5b-q8_0.gguf -c 2048
- ```
 
+ # IntelligentEstate/Fast-Dolphin_QwenStar-1.5B-Q8-GGUF
+ This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) using llama.cpp
+ Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) for more details on the model.
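
For readers of the shortened card, the provenance line above ("converted to GGUF format ... using llama.cpp") corresponds to the standard llama.cpp conversion workflow. A minimal sketch follows, assuming the usual `convert_hf_to_gguf.py` path and a locally downloaded copy of the source model; the output filename and flags are illustrative, not the exact commands used for this repo.

```bash
# Illustrative only: a typical HF-to-GGUF Q8_0 conversion with llama.cpp.
# Filenames below are assumptions; the actual commands used for this repo are not documented.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the source model locally, then convert it straight to a Q8_0 GGUF.
huggingface-cli download cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B --local-dir Dolphin3.0-Qwen2.5-1.5B
python llama.cpp/convert_hf_to_gguf.py Dolphin3.0-Qwen2.5-1.5B \
  --outfile dolphin3.0-qwen2.5-1.5b-q8_0.gguf --outtype q8_0
```

The resulting .gguf file can be loaded with llama-cli or llama-server exactly as shown in the removed instructions above.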