---
license: unknown
language:
- en
pipeline_tag: conversational
tags:
- frankenmerge
- 108b
---

# BigWeave v18 108b

<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>

The BigWeave models are an experiment to identify merge settings that increase model performance. The version number merely tracks the various attempts; it is not a quality indicator. Only merges that demonstrate good performance are retained and shared.

# Prompting Format

Mistral, Vicuna, and Alpaca. Reference templates are shown below.
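
For reference, the commonly used templates for these formats look as follows (a sketch of the usual conventions, not taken from the original card; system-prompt handling varies):

```
Mistral:
[INST] {prompt} [/INST]

Vicuna:
USER: {prompt}
ASSISTANT:

Alpaca:
### Instruction:
{prompt}

### Response:
```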

# Merge Process

This is a self-merge of 152334H/miqu-1-70b-sf. Using exl2 measurements, we identify the most relevant layers and then extend them with the layers in between, creating longer series of consecutive layers.

Merge configuration:

```
slices:
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [0,5]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [1,9]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [5,33]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [16,51]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [34,77]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [75,79]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [77,80]
merge_method: passthrough
dtype: float16
```
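
The passthrough merge method simply concatenates the listed layer spans into one deeper model, so the depth of the merge can be computed directly from the slice ranges. Below is a minimal Python sketch as a sanity check; the ranges are copied from the configuration above, and the per-layer parameter figure is a rough estimate for this architecture rather than a number from the original card:

```python
# Count the layers of the merged model from the slice ranges above.
# layer_range is half-open, i.e. [start, end), so [0,5] contributes
# layers 0..4 (5 layers).
slices = [(0, 5), (1, 9), (5, 33), (16, 51), (34, 77), (75, 79), (77, 80)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 126 layers, up from 80 in the 70b base model

# Rough estimate: ~0.86B parameters per decoder layer for a 70b-class
# Llama model puts the merge at ~108B, consistent with the model name.
print(f"~{total_layers * 0.86:.0f}B parameters")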