sometimesanotion committed: Update README.md
README.md (changed)
@@ -18,10 +18,10 @@ metrics:
 - accuracy
 pipeline_tag: text-generation
 ---
-![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.6/resolve/main/
+![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.6/resolve/main/Lamarck.webp)
 ---
 
-**Update:** Lamarck has, for the moment, taken the #1 average score for 14 billion parameter models. Counting all the way up to 32 billion parameters, it's #7. This validates the complex merge techniques which combined the complementary strengths of other work in this community into one model.
+**Update:** Lamarck has, for the moment, taken the #1 average score for 14 billion parameter models. Counting all the way up to 32 billion parameters, it's #7. This validates the complex merge techniques which combined the complementary strengths of other work in this community into one model.
 
 Lamarck 14B v0.6: A generalist merge focused on multi-step reasoning, prose, and multi-language ability. It is based on components that have punched above their weight in the 14 billion parameter class. Here you can see a comparison between Lamarck and other top-performing merges and finetunes:
 
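The "merge techniques" the README credits are model merges of the kind typically built with a tool such as mergekit. As a minimal illustrative sketch only — the model names, layer ranges, and interpolation weights below are hypothetical, not Lamarck v0.6's actual recipe — a SLERP merge of two 14B checkpoints can be described in a config like this:

```yaml
# Illustrative mergekit-style SLERP config (hypothetical, not Lamarck's recipe).
merge_method: slerp
base_model: Qwen/Qwen2.5-14B-Instruct        # hypothetical base model
slices:
  - sources:
      - model: Qwen/Qwen2.5-14B-Instruct
        layer_range: [0, 48]                 # layer count is illustrative
      - model: some-org/another-14b-finetune # hypothetical second model
        layer_range: [0, 48]
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]       # per-layer-group blend for attention
    - value: 0.5                             # default interpolation elsewhere
dtype: bfloat16
```

The `t` parameter controls where the merged weights sit between the two source models per layer group, which is how a merge can keep one model's reasoning layers while borrowing another's prose or multilingual strengths.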