---
license: apache-2.0
language:
- en
---

# Flux-Base-Optimized

`flux-base-optimized` is the base model used for fine-tuning the `flux-7b` series of models.

It is a hierarchical SLERP merge of the following models:

* mistralai/Mistral-7B-v0.1 (Apache 2.0)
* teknium/OpenHermes-2.5-Mistral-7B (Apache 2.0)
* Intel/neural-chat-7b-v3-3 (Apache 2.0)
* meta-math/MetaMath-Mistral-7B (Apache 2.0)
* openchat/openchat-3.5-0106, formerly openchat/openchat-3.5-1210 (Apache 2.0)

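Pairwise SLERP merges like these are commonly run with a tool such as mergekit; the card does not state the tooling or the interpolation weights, so the following config for the first pairwise merge (OpenHermes with neural-chat) is a hypothetical sketch, with `t: 0.5` assumed:

```yaml
# Hypothetical mergekit config for the stage-0 SLERP merge.
# The actual tool and interpolation weight are not published in the card.
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
      - model: Intel/neural-chat-7b-v3-3
        layer_range: [0, 32]
parameters:
  t: 0.5  # assumed interpolation weight (beta)
dtype: bfloat16
```

Each later stage would repeat this pattern, merging the previous stage's output with the next model in the hierarchy.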
Here's how we did the hierarchical SLERP merge:

```
[flux-base-optimized]
     β
     |
[stage-1]-+-[openchat]
     β
     |
[stage-0]-+-[meta-math]
     β
     |
[openhermes]-+-[neural-chat]
```
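The diagram above chains three pairwise merges: each stage SLERPs the previous stage's weights with one new model. A minimal sketch of the SLERP step itself, using NumPy on toy tensors (the per-stage weight β is an assumption; the card does not publish the actual values):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Interpolates along the great-circle arc between v0 and v1,
    falling back to plain linear interpolation when the tensors
    are nearly colinear.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    n0, n1 = np.linalg.norm(v0f), np.linalg.norm(v1f)
    dot = np.clip(np.dot(v0f / (n0 + eps), v1f / (n1 + eps)), -1.0, 1.0)
    if abs(dot) > 1.0 - eps:           # nearly parallel: lerp instead
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)             # angle between the two tensors
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

# Hierarchical merge on toy 2-D "weights"; beta=0.5 per stage is assumed.
openhermes  = np.array([1.0, 0.0])
neural_chat = np.array([0.0, 1.0])
stage0 = slerp(0.5, openhermes, neural_chat)   # [openhermes]-+-[neural-chat]
```

Unlike a plain weighted average, SLERP preserves the norm of the interpolated tensors when the inputs share a norm, which is why it is often preferred for merging model weights.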