fuzzy-mittenz committed
Commit 555ccf5
1 Parent(s): dbd8cef

Upload README.md with huggingface_hub

Files changed (1): README.md +3 -7
README.md CHANGED
````diff
@@ -1,8 +1,6 @@
 ---
 license: apache-2.0
-base_model:
-- jeffmeloy/Qwen2.5-7B-olm-v1.0
-- Qwen/Qwen2.5-7B-Instruct
+base_model: jeffmeloy/Qwen2.5-7B-olm-v1.0
 pipeline_tag: text-generation
 language:
 - en
@@ -11,11 +9,9 @@ tags:
 - text-generation-inference
 - llama-cpp
 - gguf-my-repo
-datasets:
-- IntelligentEstate/The_Key
 ---
 
-# fuzzy-mittenz/Qwen2.5-7B-olm-v1.0-IQ4_NL-GGUF TEST QAT TRAINING 2
+# fuzzy-mittenz/Qwen2.5-7B-olm-v1.0-IQ4_NL-GGUF
 This model was converted to GGUF format from [`jeffmeloy/Qwen2.5-7B-olm-v1.0`](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0) for more details on the model.
 
@@ -57,4 +53,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo fuzzy-mittenz/Qwen2.5-7B-olm-v1.0-IQ4_NL-GGUF --hf-file qwen2.5-7b-olm-v1.0-iq4_nl-imat.gguf -c 2048
-```
+```
````
 
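The commit above collapses `base_model` from a two-entry list to a single value and drops the `datasets` key. A minimal stdlib-only sketch (a quick sanity check on the flat key/value lines, not a real YAML parser) showing that the new front matter now yields a single `base_model` string:

```python
# Front matter lines as they appear after this commit (list entries removed).
front_matter = """\
license: apache-2.0
base_model: jeffmeloy/Qwen2.5-7B-olm-v1.0
pipeline_tag: text-generation
"""

# Split only the simple "key: value" lines; enough for this flat fragment.
meta = dict(
    line.split(": ", 1) for line in front_matter.splitlines() if ": " in line
)
print(meta["base_model"])  # jeffmeloy/Qwen2.5-7B-olm-v1.0
```

For full metadata handling a real YAML parser (e.g. PyYAML's `yaml.safe_load`) would be the usual choice; the manual split here just keeps the example dependency-free.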