qingy2024 committed
Commit 4117c7e
1 Parent(s): 8ae4c68

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+44 -8)

README.md CHANGED
@@ -1,22 +1,58 @@
  ---
- base_model: unsloth/qwen2.5-3b-bnb-4bit
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
- - gguf
  license: apache-2.0
  language:
  - en
  ---

- # Uploaded model

- - **Developed by:** qingy2024
- - **License:** apache-2.0
- - **Finetuned from model:** unsloth/qwen2.5-3b-bnb-4bit

- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

  ---
+ base_model: qingy2024/GRMR-3B-Instruct-v2
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
+ - trl
+ - llama-cpp
+ - gguf-my-repo
  license: apache-2.0
  language:
  - en
  ---

+ # qingy2024/GRMR-3B-Instruct-v2-Q8_0-GGUF
+ This model was converted to GGUF format from [`qingy2024/GRMR-3B-Instruct-v2`](https://huggingface.co/qingy2024/GRMR-3B-Instruct-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/qingy2024/GRMR-3B-Instruct-v2) for more details on the model.
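
If you prefer to fetch the GGUF file manually instead of letting llama.cpp pull it with the `--hf-repo` flag shown below, a minimal sketch using the `huggingface_hub` CLI (assuming it is installed, e.g. via `pip install huggingface_hub`) could look like this:

```bash
# Download the Q8_0 quantized file from the Hub into the current directory.
huggingface-cli download qingy2024/GRMR-3B-Instruct-v2-Q8_0-GGUF \
  grmr-3b-instruct-v2-q8_0.gguf --local-dir .
```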

+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux):

+ ```bash
+ brew install llama.cpp
+ ```

+ Invoke the llama.cpp server or the CLI.

+ ### CLI:
+ ```bash
+ llama-cli --hf-repo qingy2024/GRMR-3B-Instruct-v2-Q8_0-GGUF --hf-file grmr-3b-instruct-v2-q8_0.gguf -p "The meaning to life and the universe is"
+ ```

+ ### Server:
+ ```bash
+ llama-server --hf-repo qingy2024/GRMR-3B-Instruct-v2-Q8_0-GGUF --hf-file grmr-3b-instruct-v2-q8_0.gguf -c 2048
+ ```
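
Once the server is running, it exposes an HTTP API (recent llama.cpp builds serve an OpenAI-compatible route on port 8080 by default); a minimal query sketch with curl, assuming the default host and port, might look like this:

```bash
# Ask the local llama-server instance for a chat completion.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "The meaning to life and the universe is"}
        ],
        "max_tokens": 128
      }'
```

The server also accepts llama.cpp's native `/completion` route if you prefer sending a raw prompt instead of chat messages.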

+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

+ Step 1: Clone llama.cpp from GitHub.
+ ```
+ git clone https://github.com/ggerganov/llama.cpp
+ ```

+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+ ```
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```

+ Step 3: Run inference through the main binary.
+ ```
+ ./llama-cli --hf-repo qingy2024/GRMR-3B-Instruct-v2-Q8_0-GGUF --hf-file grmr-3b-instruct-v2-q8_0.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```
+ ./llama-server --hf-repo qingy2024/GRMR-3B-Instruct-v2-Q8_0-GGUF --hf-file grmr-3b-instruct-v2-q8_0.gguf -c 2048
+ ```