Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ datasets:
 
 A model created with the goal of a synergistic combination of different techniques used for SOTA models such as Evol-Instruct, Orca, Platypus, Lamini, FLASK and others, all into one lean holistically formed dataset and model. The example seeds are largely based on highly rated datasets like Airoboros, EverythingLM, GPTeacher and more, as well as being supplemented with certain multi-turn datasets like Dove(A successor to Puffin).
 
-Entirely contained within 20K training examples!
+Entirely contained within 20K training examples, mostly comprised of newly synthesized tokens never used for model training until now!
 
 ## Process of creation and special thank yous!
 
@@ -108,6 +108,4 @@ The following are benchmarks we checked for contamination for:
 
 - GPT4All
 
-## Benchmarks! COMING SOON
-
-
+## Benchmarks! COMING SOON