Update README.md
README.md CHANGED
```diff
@@ -28,7 +28,6 @@ While most of the tokens within Capybara are newly synthsized and part of datase
 
 ![Capybara](https://i.imgur.com/yB58OoD.jpeg)
 
-
 ## Model Training
 
 Nous-Capybara 7B is a new model trained for multiple epochs on a dataset of less than 20,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4 comprised of entirely newly synthesized tokens that previously didn't exist on HuggingFace.
@@ -109,4 +108,6 @@ The following are benchmarks we checked for contamination for:
 
 - GPT4All
 
-## Benchmarks!
+## Benchmarks! (Important to note that all the mentioned benchmarks are in a single-turn dataset setting, Capybara should excel even further at multi-turn conversational tasks.)
+
+![Capybara](https://i.imgur.com/n8lkmyK.png)
```