Update README.md
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: wtfpl
 datasets:
--
+- teknium/openhermes
 pipeline_tag: text-generation
 ---
 
@@ -22,8 +22,20 @@ with an efficient hardware-aware design and implementation in the spirit of [Fla
 
 ## Dataset info
 
-
+The OpenHermes dataset is composed of 242,000 entries of primarily GPT-4 generated data, drawn from open datasets across the AI landscape.
 
+OpenHermes 13B is the first Hermes fine-tune trained on a fully open-source dataset!
+
+OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:
+
+- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
+- WizardLM (v1, evol_instruct 70k), by the WizardLM Team/nlpxucan
+- Airoboros GPT-4 (v1.0), by JonDurbin
+- Camel-AI's domain expert datasets, by the Camel-AI Team
+- CodeAlpaca, by Sahil2801
+- GPT4-LLM and Unnatural Instructions, by Microsoft
+Filtering included removal of OpenAI refusals, disclaimers, "As an AI"-type examples, and more.
+The base dataset mix is identical to the original Nous-Hermes', minus the Nous-Instruct and PDACTL datasets, which were private.
 
 ## Usage
 
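The first hunk fills in the card's YAML frontmatter so the Hub links the model to its training data: `teknium/openhermes` under `datasets:`. As a minimal sketch of what that link enables (assuming the Hugging Face `datasets` library and a public dataset repo; split and column names are read at runtime rather than assumed):

```python
from datasets import load_dataset

# Pull the dataset the card's metadata now points at.
# "teknium/openhermes" comes from the hunk above; split names and
# column layout are discovered from the repo, not hard-coded here.
ds = load_dataset("teknium/openhermes")

for split_name, split in ds.items():
    print(split_name, len(split), split.column_names)
```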
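The card states that filtering removed OpenAI refusals, disclaimers, and "As an AI"-type examples, but it does not publish the filter itself. The sketch below only illustrates that kind of pass: the phrase list and the `response` field name are hypothetical stand-ins, not the pipeline actually used.

```python
# Illustrative refusal/disclaimer filter in the spirit the card describes.
# REFUSAL_MARKERS and the "response" field are assumptions for this sketch.
REFUSAL_MARKERS = (
    "as an ai",
    "as a language model",
    "i'm sorry, but i cannot",
    "i cannot fulfill",
)

def keep_example(example: dict) -> bool:
    """Return False for entries that look like refusals or disclaimers."""
    text = str(example.get("response", "")).lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

# With a Hugging Face dataset this plugs into Dataset.filter(), e.g.:
#   filtered = ds["train"].filter(keep_example)
```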
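`pipeline_tag: text-generation` in the frontmatter tells the Hub which inference widget and `transformers` pipeline to associate with the model. The card's own `## Usage` section is not shown in this diff, so the following is only a generic sketch; the repo id `teknium/OpenHermes-13B` is an assumption based on the model's name.

```python
from transformers import pipeline

# "text-generation" matches the card's pipeline_tag; the repo id is an
# assumed placeholder, not taken from this diff.
generator = pipeline("text-generation", model="teknium/OpenHermes-13B")

out = generator("Summarize the OpenHermes dataset in one sentence.",
                max_new_tokens=64)
print(out[0]["generated_text"])
```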