Initial GGML model commit
README.md CHANGED
@@ -36,8 +36,6 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
 * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
 
-None
-
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ)
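Both libraries listed in the hunk above can run the GGML files in this repo locally. As a minimal sketch only: the model filename and parameter values below are illustrative assumptions, not specified by this commit.

```python
# Minimal local-inference sketch with llama-cpp-python (GGML era).
# The filename below is hypothetical -- use whichever quantised .bin
# file you actually downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-llama-13b.ggmlv3.q4_K_M.bin",  # hypothetical local path
    n_ctx=2048,        # context window size
    n_gpu_layers=32,   # offload layers to GPU if built with GPU support
)

out = llm("What is the GGML file format used for?", max_tokens=128)
print(out["choices"][0]["text"])
```

ctransformers offers a comparable route via `AutoModelForCausalLM.from_pretrained(..., model_type="llama")`.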
@@ -167,7 +165,7 @@ After uncensoring, deduping, and cleaning, our dataset consists of:
 - 842,610 instructions of FLANv2 augmented with GPT-4 completions
 - 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions
 
-We followed the submix and system prompt distribution outlined in the Orca paper. With a few exceptions. We included all 75k of CoT in the FLAN-1m dataset rather than sampling that. Also, we found that many items were duplicated, so we removed duplicates
+We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions: we included all 75k CoT examples from the FLAN-1m dataset rather than sampling them, and we found that many items were duplicated, so we removed the duplicates.
 
 Then we filtered out instances of alignment, refusal, avoidance, and bias, in order to produce an uncensored model upon which can be layered your personalized alignment LoRA.
 
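The deduplication mentioned in the rewritten paragraph above is described but not shown in this commit. Purely as a generic sketch (the record layout and field names are assumptions, not the Dolphin pipeline): one common approach hashes a normalised copy of each instruction and keeps only the first occurrence.

```python
# Generic dedup sketch: NOT the Dolphin authors' script, which is not
# published with this commit. Field names are hypothetical.
import hashlib

def dedupe(records):
    """Drop records whose normalised instruction text was already seen."""
    seen = set()
    unique = []
    for rec in records:
        key = hashlib.sha256(
            rec["instruction"].strip().lower().encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

sample = [
    {"instruction": "Name three primary colours.", "response": "Red, yellow, blue."},
    {"instruction": "name three primary colours. ", "response": "Red, yellow, blue."},
]
print(len(dedupe(sample)))  # prints 1
```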
@@ -208,5 +206,7 @@ The core Dolphin Team includes:
 - Special thanks to WingLian, NanoBit, Teknium for helpful advice
 - Special thanks to EdenCoder and chirper.ai for mentorship and financial sponsorship.
 - Special thanks to Kilkonie for his very valued mentorship.
-- Thank you to Catto
+- Thank you to Catto.
+- Thank you to Nicolai Schleifer, financial sponsor.
+- Thank you to Eric Fleming, financial sponsor.
 - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.