avemio-digital committed
Commit a245d51 (verified) · Parent: 182e8f6

Update README.md

Files changed (1): README.md (+11 −11)
README.md CHANGED
@@ -1,15 +1,15 @@
 ---
 license: llama3.1
 datasets:
-- avemio/GRAG-CPT-HESSIAN-AI
-- avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
-- avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI
+- avemio/German-RAG-CPT-HESSIAN-AI
+- avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI
+- avemio/German-RAG-ORPO-ShareGPT-HESSIAN-AI
 - VAGOsolutions/SauerkrautLM-Fermented-GER-DPO
 - VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO
 language:
 - en
 - de
-base_model: avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI
+base_model: avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI
 pipeline_tag: question-answering
 tags:
 - German
@@ -22,9 +22,9 @@ tags:
 - gguf-my-repo
 ---
 
-# avemio-digital/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF
-This model was converted to GGUF format from [`avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI`](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) for more details on the model.
+# avemio-digital/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF
+This model was converted to GGUF format from [`avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI`](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) for more details on the model.
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
@@ -37,12 +37,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -c 2048
+llama-server --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -59,9 +59,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo avemio-digital/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file grag-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -c 2048
+./llama-server --hf-repo avemio-digital/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf -c 2048
 ```
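Every renamed command above changes only the `--hf-repo`/`--hf-file` pair, which llama.cpp resolves to a direct Hugging Face Hub download of the form `https://huggingface.co/{repo}/resolve/{revision}/{file}`. A minimal sketch of that mapping, using the repo and file names from this commit (the `hub_resolve_url` helper is illustrative, not part of llama.cpp or any Hub library):

```python
# Sketch: how an --hf-repo / --hf-file pair maps to a Hub download URL.
# The helper below is hypothetical; it only mirrors the Hub's resolve-URL scheme.
def hub_resolve_url(repo: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL the Hugging Face Hub serves for a repo file."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

url = hub_resolve_url(
    "avemio-digital/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI-Q8_0-GGUF",
    "German-RAG-llama-3.1-8b-orpo-hessian-ai-q8_0.gguf",
)
print(url)
```

This is why the README's `--hf-file` value had to be renamed alongside the repo: the old `grag-…` filename would no longer resolve under the renamed repository.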