sometimesanotion committed
Commit 1dfe0df · verified · 1 Parent(s): 977f263

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -31,7 +31,7 @@ Previous releases were based on a SLERP merge of model_stock+della branches focu
 
 A notable contribution from the middle to upper layers of Lamarck v0.6 comes from [Krystalan/DRT-o1-14B](https://huggingface.co/Krystalan/DRT-o1-14B). It has a fascinating research paper: [DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought](https://huggingface.co/papers/2412.17498).
 
-Lamarck 0.6 hit a whole new of multi-pronged merge strategies:
+Lamarck 0.6 hit a whole new level of toolchain-automated complexity with its multi-pronged merge strategies:
 
 - **Extracted LoRA adapters from special-purpose merges**
 - **Separate branches for breadcrumbs and DELLA merges**