## **Nous-Capybara-7B**
A model created with the goal of synergistically combining techniques used for SOTA models such as Evol-Instruct, Orca, Vicuna, Lamini, FLASK, and others into one lean, holistically formed dataset and model. The seed instructions used to start the synthesized conversations are largely based on highly acclaimed datasets like Airoboros, Know logic, EverythingLM, and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, supplemented with multi-turn datasets like Dove (a successor to Puffin).
The dataset is entirely contained within 20K training examples, mostly composed of newly synthesized tokens never before used for model training!