sometimesanotion
committed on
Update README.md
README.md CHANGED
@@ -13,6 +13,7 @@ base_model:
 - Krystalan/DRT-o1-14B
 - underwoods/medius-erebus-magnum-14b
 - sometimesanotion/Abliterate-Qwenvergence
+- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
 metrics:
 - accuracy
 pipeline_tag: text-generation
@@ -43,7 +44,7 @@ This model was made in two branches: a della_linear merge, and a sequence of mo
 
 ### Configuration
 
-This model was made in two branches: a della_linear merge, and a sequence of model_stock and then breadcrumbs. They were finalized SLERP-
+This model was made in two branches: a della_linear merge, and a sequence of model_stock and then breadcrumbs. Most models underwent LoRA merges to help maintain IFEVAL. They were finalized with the SLERP merge below.
 
 Provided this release candidate's results validate its methods, I will have an eye-popping merge sequence to share that's several times more detailed than anything previous. Otherwise, I'd rather not lead anyone to waste their compute.
 
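To make the two branches concrete: below is a rough sketch of the general shape a della_linear stage takes as a mergekit config. It is an illustration, not the author's actual recipe; the base model, weights, densities, and epsilon are placeholder assumptions, and the second branch would follow the same shape with `merge_method: model_stock` and then `breadcrumbs` substituted.

```yaml
# Hypothetical sketch of a della_linear stage in mergekit, NOT the
# actual recipe: base_model, weight, density, and epsilon are assumed
# placeholder values for illustration only.
merge_method: della_linear
base_model: Qwen/Qwen2.5-14B                 # assumed base
models:
  - model: Krystalan/DRT-o1-14B
    parameters:
      weight: 0.5     # assumed contribution weight
      density: 0.40   # assumed fraction of delta parameters kept
  - model: underwoods/medius-erebus-magnum-14b
    parameters:
      weight: 0.5
      density: 0.40
parameters:
  epsilon: 0.05       # assumed spread for DELLA's adaptive drop probabilities
dtype: bfloat16
```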
@@ -93,4 +94,4 @@ slices:
 - model: sometimesanotion/lamarck-14b-converge-breadcrumbs
   layer_range: [ 40, 48 ]
 
-```
+```
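For context on that last hunk: the fragment is the tail of a mergekit SLERP config, which the commit closes with its missing code fence. A minimal sketch of the overall shape such a config takes follows; only `sometimesanotion/lamarck-14b-converge-breadcrumbs` and the `[ 40, 48 ]` layer range come from the diff, while the della_linear branch name, the interpolation factor `t`, and the dtype are assumed placeholders.

```yaml
# Minimal sketch of a SLERP finalization between the two branches.
# Only the converge-breadcrumbs model and the [ 40, 48 ] layer range
# appear in the diff above; every other name and value is an assumed
# placeholder.
merge_method: slerp
base_model: sometimesanotion/lamarck-14b-della-linear  # assumed name for the della_linear branch
slices:
  - sources:
      - model: sometimesanotion/lamarck-14b-della-linear  # assumed
        layer_range: [ 0, 40 ]
      - model: sometimesanotion/lamarck-14b-converge-breadcrumbs
        layer_range: [ 0, 40 ]
  - sources:
      - model: sometimesanotion/lamarck-14b-della-linear  # assumed
        layer_range: [ 40, 48 ]
      - model: sometimesanotion/lamarck-14b-converge-breadcrumbs
        layer_range: [ 40, 48 ]
parameters:
  t: 0.5              # assumed interpolation factor between the two branches
dtype: bfloat16
```

A config like either sketch would be run with mergekit's standard entry point, `mergekit-yaml config.yaml output-dir`.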