sometimesanotion
committed on
Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ pipeline_tag: text-generation
Lamarck 14B v0.6: A generalist merge focused on multi-step reasoning, prose, and multi-language ability. It is based on components that have punched above their weight in the 14 billion parameter class. Here you can see a comparison between Lamarck and other top-performing merges and finetunes:
-**Update:** Lamarck has, for the moment, taken the [#1 average score](https://shorturl.at/STz7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) for general text-generation assistant language models
+**Update:** Lamarck has, for the moment, taken the [#1 average score](https://shorturl.at/STz7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) for general-purpose text-generation assistant language models at 14B parameters, and indeed among all models under 32 billion parameters. Counting 32-billion-parameter models as well, it currently ranks #10 as of this writing. This validates the complex merge techniques that combined the complementary strengths of other work in this community into one model. A little layer analysis goes a long way.
![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.6/resolve/main/comparison.png)
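
For readers curious what "complex merge techniques" and "layer analysis" mean in practice: merges like this are typically built with [mergekit](https://github.com/arcee-ai/mergekit). The sketch below is a minimal, hypothetical slerp configuration in which the interpolation weight varies per layer and per module, the kind of knob that layer analysis informs. It is not Lamarck's actual recipe; the model names, layer count, and weight gradients are placeholder assumptions.

```yaml
# Hypothetical sketch only, NOT Lamarck's published recipe: a mergekit
# slerp config whose interpolation weights vary by layer and module.
# Model names, layer count, and weight gradients are illustrative placeholders.
slices:
  - sources:
      - model: Qwen/Qwen2.5-14B-Instruct   # placeholder base model
        layer_range: [0, 48]
      - model: arcee-ai/Virtuoso-Small     # placeholder donor model
        layer_range: [0, 48]
merge_method: slerp
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
  t:
    - filter: self_attn                    # attention leans toward the donor in later layers
      value: [0.0, 0.3, 0.5, 0.7, 1.0]
    - filter: mlp                          # MLP blocks lean the opposite way
      value: [1.0, 0.7, 0.5, 0.3, 0.0]
    - value: 0.5                           # everything else: blend at the midpoint
dtype: bfloat16
```

Varying `t` across depth lets a merge take, say, early-layer attention mostly from one parent and late-layer attention mostly from the other, rather than applying one uniform blend to the whole network.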