MaziyarPanahi
committed on
Upload folder using huggingface_hub (#1)
Browse files
- 9ba49a05234605231927b829b5d399a67c161c1aff2ccc230fe80e545b675a3c (c9e4f5e5c123988119833055db20d1bc4a3ed580)
- c6e6999c9100bfbcc4d0f6ce4fa57fa8c841e92b7717d3dd41363b588ed82e36 (07fa6eb9c34ddb221231e1f8f14dd6f6bf9f07cb)
- 5ee33e71d996c435d2daed0ea3611648704f8566d78029e0c3acd01a5d93ec25 (ce4b825811c4e4f6daac0703cd020dcafe4d6d6c)
- 27b0b3c01b23fe242e59dbf10493bd5ef9d04c4ae7b3a280587994cce3b2aa00 (80e29cfcb934adf75747c43319e62b13055ff89e)
- de4b47f2b27887bec16f664b53c4bc147edc5bdb60cd97d43bcea4c99a59c122 (b6bfb97baa2771cb22e85819931fbced66602736)
- f7b3b3b1ee4d265b51e2f62e5d78ecf7b44b34d0b6f811c9d8370ca60bee853c (957e8ccb4a9cfcc4a69ccb407e65d52d34090d6a)
- .gitattributes +6 -0
- NekoMix-12B-GGUF_imatrix.dat +3 -0
- NekoMix-12B.Q5_K_M.gguf +3 -0
- NekoMix-12B.Q5_K_S.gguf +3 -0
- NekoMix-12B.Q6_K.gguf +3 -0
- NekoMix-12B.Q8_0.gguf +3 -0
- NekoMix-12B.fp16.gguf +3 -0
- README.md +45 -0
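
The commit message says the files were pushed with huggingface_hub; a minimal sketch of that kind of upload, where the local folder path is a placeholder rather than anything taken from this commit:

```python
# Hedged sketch of an upload like "Upload folder using huggingface_hub".
# folder_path is a placeholder; repo_id matches the repo this commit belongs to.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./NekoMix-12B-GGUF",          # local folder holding the GGUF files (placeholder)
    repo_id="MaziyarPanahi/NekoMix-12B-GGUF",  # target model repo on the Hub
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```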
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+NekoMix-12B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+NekoMix-12B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+NekoMix-12B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+NekoMix-12B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+NekoMix-12B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+NekoMix-12B-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
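
The entries above mark every GGUF file and the imatrix data as Git LFS objects, so downloads resolve through LFS rather than plain Git. A minimal sketch of fetching one of the tracked quants with huggingface_hub (the local_dir value is a placeholder):

```python
# Minimal sketch: download one of the LFS-tracked quants listed in this commit.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MaziyarPanahi/NekoMix-12B-GGUF",
    filename="NekoMix-12B.Q5_K_M.gguf",  # one of the files added in this commit
    local_dir="./models",                # placeholder download location
)
print(path)
```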
NekoMix-12B-GGUF_imatrix.dat
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d64f0a4e0037068614361c592b2410c499d527910e6acb8ce52a9e16235ed2ca
+size 7054394
NekoMix-12B.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d720a96200da02860d7ff5bc87c3dd977606cf96e6a8cdcb8e6199fa66c268e
+size 8727632256
NekoMix-12B.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b15c31496ccb1581e36c48fb085b19f15bbfc0cae22dab9a70393015f6753adf
+size 8518736256
NekoMix-12B.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:521975baa2d6f61d48a4716536ea4c2fd5b3d2e766225942adf58b9824cccc08
+size 10056210816
NekoMix-12B.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:612a26dca49809aa0300294e1312d03a753e42dd4a01b3c77641f43036b28092
+size 13022370176
NekoMix-12B.fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e96ab846ba72e24fab3bc8564e1e47b3f2d7e1e720033318ce76cc19c8f7fa65
+size 24504277184
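
Each ADDED file above is stored as a Git LFS pointer (version, oid, size) rather than the binary itself. A minimal sketch of checking a downloaded copy against a pointer, using the oid and size from the NekoMix-12B.fp16.gguf entry; the local path is a placeholder:

```python
# Minimal sketch: verify a downloaded GGUF against its LFS pointer.
# Expected values are copied from the NekoMix-12B.fp16.gguf pointer above.
import hashlib

EXPECTED_OID = "e96ab846ba72e24fab3bc8564e1e47b3f2d7e1e720033318ce76cc19c8f7fa65"
EXPECTED_SIZE = 24504277184

h = hashlib.sha256()
size = 0
with open("NekoMix-12B.fp16.gguf", "rb") as f:        # placeholder local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: {size}"
assert h.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("file matches its LFS pointer")
```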
README.md
ADDED
@@ -0,0 +1,45 @@
+---
+base_model: Moraliane/NekoMix-12B
+inference: false
+model_creator: Moraliane
+model_name: NekoMix-12B-GGUF
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+---
+# [MaziyarPanahi/NekoMix-12B-GGUF](https://huggingface.co/MaziyarPanahi/NekoMix-12B-GGUF)
+- Model creator: [Moraliane](https://huggingface.co/Moraliane)
+- Original model: [Moraliane/NekoMix-12B](https://huggingface.co/Moraliane/NekoMix-12B)
+
+## Description
+[MaziyarPanahi/NekoMix-12B-GGUF](https://huggingface.co/MaziyarPanahi/NekoMix-12B-GGUF) contains GGUF format model files for [Moraliane/NekoMix-12B](https://huggingface.co/Moraliane/NekoMix-12B).
+
+### About GGUF
+
+GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
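
The README lists llama-cpp-python among the clients that read GGUF; a minimal, illustrative sketch of loading one of these quants with it (the parameter values are assumptions, not taken from this repo):

```python
# Illustrative only; assumes NekoMix-12B.Q5_K_M.gguf has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="NekoMix-12B.Q5_K_M.gguf",  # local path to the quantized model (placeholder)
    n_ctx=4096,                            # context window; adjust to available memory
    n_gpu_layers=-1,                       # offload all layers if built with GPU support
)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```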