Haary/haryra-7B-gguf is an Indonesian-language LLM.

[Haary/haryra-7b-id](https://huggingface.co/Haary/haryra-7b-id) is a quantized version of the base model [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) in GGUF format.

## How to run with Python code

You can use GGUF models from Python with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. The steps below use ctransformers; a llama-cpp-python sketch appears at the end of this section.

### How to load this model in Python code, using ctransformers

#### First, install the ctransformers package

Run one of the following commands, depending on your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple example code for running ctransformers

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU; set it to 0 if no
# GPU acceleration is available on your system. The model_file name here is an
# assumption: check the repository's file list for the exact GGUF file name.
llm = AutoModelForCausalLM.from_pretrained("Haary/haryra-7B-gguf",
                                           model_file="haryra-7b.q5_k_m.gguf",
                                           model_type="mistral", gpu_layers=50)

print(llm("AI is going to"))
```
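
The base model [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) uses the ChatML prompt format, so prompts framed that way will often get better answers; the fine-tuned model may define its own template, so treat this as a sketch and check the [Haary/haryra-7b-id](https://huggingface.co/Haary/haryra-7b-id) model card. This reuses the `llm` object from the example above, and the system and user messages are only illustrative:

```python
# ChatML-style prompt, the format used by the Mistral-7B-OpenOrca base model.
prompt = (
    "<|im_start|>system\n"
    "Anda adalah asisten yang menjawab dalam Bahasa Indonesia.<|im_end|>\n"
    "<|im_start|>user\n"
    "Apa ibu kota Indonesia?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on the end-of-turn marker so generation ends after the answer.
print(llm(prompt, max_new_tokens=256, stop=["<|im_end|>"]))
```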
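
### How to load this model using llama-cpp-python (a sketch)

The [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) library mentioned above can also load GGUF files. A minimal sketch, after `pip install llama-cpp-python huggingface_hub`; the GGUF file name is an assumption, so check the repository's file list for the exact name:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from the repo (the file name is an assumed example).
model_path = hf_hub_download("Haary/haryra-7B-gguf", "haryra-7b.q5_k_m.gguf")

llm = Llama(model_path=model_path,
            n_ctx=2048,       # context window size
            n_gpu_layers=50)  # set to 0 if no GPU acceleration is available

output = llm("AI is going to", max_tokens=128)
print(output["choices"][0]["text"])
```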