Update README.md
---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- Endevor/InfinityRP-v1-7B
- mistralai/Mistral-7B-v0.1
- CalderaAI/Naberius-7B
- CalderaAI/Hexoteric-7B
- Endevor/EndlessRP-v3-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
pipeline_tag: text-generation
---

# merge

This model is focused on roleplaying, so please don't expect much from it in other areas; it will do its job as a roleplaying model.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Careful: it can generate NSFW content. Whatever you generate is your responsibility. Enjoy it by roleplaying. Cheers ☺️.
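
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the merged weights can be loaded like any other causal LM. A minimal sketch; the repo id below is a placeholder, so substitute this repository's actual id:

```python
# Minimal sketch of loading the merged model with transformers.
# "user/merged-rp-7b" is a placeholder repo id (assumption), not this repo's real name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "user/merged-rp-7b"  # placeholder (assumption)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the merge's dtype: float16
    device_map="auto",
)

prompt = "You are a tavern keeper in a fantasy town. A stranger walks in."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```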

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.

### Models Merged

The following models were included in the merge:

* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
* [CalderaAI/Naberius-7B](https://huggingface.co/CalderaAI/Naberius-7B)
* [CalderaAI/Hexoteric-7B](https://huggingface.co/CalderaAI/Hexoteric-7B)
* [Endevor/EndlessRP-v3-7B](https://huggingface.co/Endevor/EndlessRP-v3-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      density: 0.6
      weight: 0.25
  - model: Endevor/InfinityRP-v1-7B
    parameters:
      density: 0.6
      weight: 0.25
  - model: Endevor/EndlessRP-v3-7B
    parameters:
      density: 0.6
      weight: 0.25
  - model: CalderaAI/Naberius-7B
    parameters:
      density: 0.6
      weight: 0.25
  - model: CalderaAI/Hexoteric-7B
    parameters:
      density: 0.6
      weight: 0.25
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
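
For intuition about what `density` and `weight` do above: TIES builds a task vector for each model (fine-tuned minus base weights), trims all but the top-`density` fraction by magnitude, elects a per-parameter sign by weighted majority, and sums only the weighted deltas that agree with it. A toy numpy sketch of that idea follows; it is illustrative only, not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, finetuned, densities, weights, normalize=False):
    """Toy TIES merge over flat parameter vectors (illustrative only)."""
    deltas = []
    for ft, d, w in zip(finetuned, densities, weights):
        delta = ft - base                            # task vector
        k = max(1, int(np.ceil(d * delta.size)))     # keep top-density fraction
        cutoff = np.sort(np.abs(delta))[-k]
        delta = np.where(np.abs(delta) >= cutoff, delta, 0.0)  # trim small deltas
        deltas.append(w * delta)
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))           # per-parameter sign election
    agree = np.sign(stacked) == elected              # drop deltas that disagree
    merged = np.where(agree, stacked, 0.0).sum(axis=0)
    if normalize:                                    # the config above uses normalize: false
        merged /= np.maximum(agree.sum(axis=0), 1)
    return base + merged

# tiny demo with two fake 4-parameter "models"
base = np.zeros(4)
tuned = [np.array([0.4, -0.2, 0.1, 0.0]), np.array([0.5, 0.3, -0.1, 0.0])]
print(ties_merge(base, tuned, densities=[0.6, 0.6], weights=[0.25, 0.25]))
```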

### download

Download any one of the files, not all of them.
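
A minimal sketch of fetching a single quantized file with `huggingface_hub`; the repo id and filename are placeholders, so pick one file from the table further down:

```python
# Fetch exactly one GGUF file rather than cloning the whole repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="user/merged-rp-7b-GGUF",  # placeholder repo id (assumption)
    filename="model.Q4_K_M.gguf",      # placeholder filename (assumption)
)
print(path)  # local path to the downloaded file
```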

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* llama.cpp. The source project for GGUF. Offers a CLI and a server option.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* GPT4All, a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* llama-cpp-python, a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
### info

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| Q2_K.gguf | Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
| Q3_K_S.gguf | Q3_K_S | 3 | 3.16 GB | 5.66 GB | very small, high quality loss |
| Q3_K_M.gguf | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
| Q3_K_L.gguf | Q3_K_L | 3 | 3.82 GB | 6.32 GB | small, substantial quality loss |
| Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Q4_K_S.gguf | Q4_K_S | 4 | 4.14 GB | 6.64 GB | small, greater quality loss |
| Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| Q5_0.gguf | Q5_0 | 5 | 5.00 GB | 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Q5_K_S.gguf | Q5_K_S | 5 | 5.00 GB | 7.50 GB | large, low quality loss - recommended |
| Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| Q6_K.gguf | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

[Note: this info format is borrowed from [@TheBloke](https://huggingface.co/TheBloke).]
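
To connect the table to the offloading note: a minimal llama-cpp-python sketch (one of the clients listed above) for running a downloaded file; the model path is a placeholder:

```python
# Run a downloaded GGUF file with llama-cpp-python.
# n_gpu_layers > 0 offloads layers to the GPU, trading the RAM
# figures in the table above for VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.Q4_K_M.gguf",  # placeholder path (assumption)
    n_ctx=4096,                        # context window size
    n_gpu_layers=0,                    # set > 0 to offload layers to the GPU
)
out = llm("You are a wandering bard. Introduce yourself.", max_tokens=128)
print(out["choices"][0]["text"])
```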

### citation

MergeKit, described in the following paper, was used to make this merge.

```
@article{goddard2024arcee,
    title={Arcee's MergeKit: A Toolkit for Merging Large Language Models},
    author={Goddard, Charles and Siriwardhana, Shamane and Ehghaghi, Malikeh and Meyers, Luke and Karpukhin, Vlad and Benedict, Brian and McQuade, Mark and Solawetz, Jacob},
    journal={arXiv preprint arXiv:2403.13257},
    year={2024}
}
```