![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cf23cffbd0cc580bc65c73/QDvxvuS3M7oHv7JI5d1ke.png)
Custom model "DolphinStar-12.5B" (a.k.a. "Dolphin2Star1"), merged by Noodlz.

A 12.5B linear layer-stack merge that uses Dolphin 2.8 (an uncensored fine-tune of Mistral-7B-v0.2) as the base and interleaves layers from Starling-LM-7B-beta (a fine-tune of Mistral-7B-v0.1).

Have fun =)
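To try it out, here is a minimal loading sketch with 🤗 Transformers. The repo id `Noodlz/DolphinStar-12.5B` is an assumption based on this card's location; adjust it to wherever the weights actually live:

```python
# Minimal sketch: load the merged model with Hugging Face Transformers.
# NOTE: the repo id below is an assumption, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Noodlz/DolphinStar-12.5B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the merge was produced in float16 (see config below)
    device_map="auto",
)

prompt = "Write a haiku about dolphins and stars."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```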
---
license: apache-2.0
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
- mergekit
- merge
---
# DolphinStar-12.5B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
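For intuition, a linear merge is just a weighted average of corresponding parameter tensors. Here is a toy sketch of the idea (an illustration for the two-model, equal-shape case, not mergekit's actual implementation):

```python
# Toy sketch of a linear merge: element-wise weighted average of
# corresponding parameter tensors from two models with identical shapes.
# This is an illustration, not mergekit's implementation.
import torch

def linear_merge(state_a, state_b, weight_a=1.0, weight_b=1.0):
    """Average two state dicts key-by-key, normalizing the weights."""
    total = weight_a + weight_b
    return {
        key: (weight_a * state_a[key] + weight_b * state_b[key]) / total
        for key in state_a
    }

# Example with dummy tensors standing in for model weights:
a = {"layer.weight": torch.ones(2, 2)}
b = {"layer.weight": torch.zeros(2, 2)}
print(linear_merge(a, b)["layer.weight"])  # every entry is 0.5
```

Note that with a weight of 0 on one source (as in the first and last slices of the configuration below), the average reduces to copying the other model's layers verbatim.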
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NexusFlow/Starling-LM-7B-beta](https://huggingface.co/NexusFlow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model. The slices stack overlapping layer ranges from both 7B models into a single 56-layer network (a stock Mistral-7B has 32 layers), which is roughly where the ~12.5B parameter count comes from; in the first and last slices, Starling's contribution is zero-weighted, so those layers come entirely from Dolphin:
```yaml
merge_method: linear
parameters:
  weight: 1.0
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0, 1]
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [0, 1]
        parameters:
          weight: 0
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [1, 8]
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [4, 12]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [8, 16]
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [12, 20]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [16, 24]
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [20, 28]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [24, 31]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [31, 32]
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [31, 32]
        parameters:
          weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
```
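To reproduce the merge, save the config as `config.yaml` and run it through mergekit, either via its CLI (`mergekit-yaml config.yaml ./output_folder`) or its Python entry point. A sketch of the latter is below; it follows mergekit's documented library usage, though exact option and method names can shift between versions:

```python
# Sketch of reproducing the merge with mergekit's Python API.
# Follows mergekit's documented library usage; names may vary by version.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./output_folder",          # where the merged model is written
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # honor tokenizer_source from the config
    ),
)
```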