matrixportal committed · Commit 51ea599 · verified · 1 Parent(s): 289cfc2

Upload README.md with huggingface_hub

Files changed (1): README.md (added, +94 -0)
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: WiroAI/wiroai-turkish-llm-9b
language:
- tr
model-index:
- name: wiroai-turkish-llm-9b
  results:
  - task:
      type: multiple-choice
    dataset:
      name: MMLU_TR_V0.2
      type: multiple-choice
    metrics:
    - type: 5-shot
      value: 0.5982
      name: 5-shot
      verified: false
    - type: 0-shot
      value: 0.4991
      name: 0-shot
      verified: false
    - type: 25-shot
      value: 0.5367
      name: 25-shot
      verified: false
    - type: 10-shot
      value: 0.5701
      name: 10-shot
      verified: false
    - type: 5-shot
      value: 0.6682
      name: 5-shot
      verified: false
    - type: 5-shot
      value: 0.6058
      name: 5-shot
      verified: false
---

# matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF
This model was converted to GGUF format from [`WiroAI/wiroai-turkish-llm-9b`](https://huggingface.co/WiroAI/wiroai-turkish-llm-9b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WiroAI/wiroai-turkish-llm-9b) for more details on the model.
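
If you'd rather fetch the quantized file once and keep it locally (instead of letting llama.cpp stream it from the Hub on demand), the Hugging Face CLI can download it directly. A minimal sketch, assuming the `huggingface_hub` CLI is installed and using the GGUF file name from the commands below:

```bash
# Download the single GGUF file into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF \
  wiroai-turkish-llm-9b-q4_k_s.gguf --local-dir .
```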

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
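
To confirm the install picked up a working build, you can check the version string first (a quick sanity check, assuming the brew formula ships the `llama-cli` binary):

```bash
llama-cli --version
```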

### CLI:
```bash
llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
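
Since the underlying model is tagged `conversational`, chat-style use may work better than a raw completion prompt. A sketch, assuming your build supports the `-cnv` (conversation mode) flag, which applies the model's chat template:

```bash
# -cnv starts an interactive chat session instead of a one-shot completion.
llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -cnv
```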

### Server:
```bash
llama-server --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -c 2048
```
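
Once the server is running you can query it over HTTP. A minimal sketch, assuming the default bind of `127.0.0.1:8080` and the server's OpenAI-compatible chat endpoint:

```bash
# Send one chat turn to the OpenAI-compatible endpoint exposed by llama-server.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}], "max_tokens": 128}'
```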

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
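
Note that newer llama.cpp checkouts have replaced the Makefile with CMake, so the `make` invocation above may fail on a fresh clone. A rough equivalent under that assumption (option names reflect current upstream and may differ across versions):

```bash
# CMake-based build; -DGGML_CUDA=ON is the current spelling of the CUDA flag.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```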

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo matrixportal/wiroai-turkish-llm-9b-Q4_K_S-GGUF --hf-file wiroai-turkish-llm-9b-q4_k_s.gguf -c 2048
```