---
base_model:
  - RozGrov/NemoDori-v0.2-12B-MN-BT
library_name: transformers
tags:
  - mergekit
  - merge
---

# NemoDori-v0.2-Upscaled.1-14B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

An upscaled version of NemoDori-v0.2-12B-MN-BT, now at 14B.
This should somehow make it more clever; if not, it's just my feeling.
NemoDori v0.2 is my best merge model so far, so I wanted to see whether upscaling it (merging the model with itself) has any effect.

As expected, this model can use the v0.1 preset and stays creative up to at least temp 2. I didn't have enough time to test it further, because I made an even MORE upscaled version of v0.2, which will be made public once my work ends.
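If you want to try it, here is a minimal usage sketch with transformers (not an official example; the prompt is a placeholder, and you should adjust dtype and device to your hardware). The sampling settings mirror the temp-2 note above.

```python
# Minimal usage sketch; assumes a recent transformers and enough VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RozGrov/NemoDori-v0.2-Upscaled.1-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# High-temperature sampling, per the "creative up to temp 2" note above.
inputs = tokenizer("Write a short story about a fox:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=2.0, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```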

I am glad this worked out. I hadn't been able to get the passthrough method to work before: the results were mostly gibberish, while my stupid brain kept thinking that doing something random would magically create what I wanted. It took me 3 tries to get it right, but still... I don't know why it works now.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
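In plain terms, passthrough copies each `layer_range` slice verbatim and stacks the slices in order, so overlapping ranges duplicate layers. A tiny sketch of the layer bookkeeping implied by the config below (not mergekit internals):

```python
# Layer bookkeeping only; the real merge copies the actual weight tensors.
slices = [(0, 8), (8, 24), (16, 32), (32, 40)]  # layer_range values from the config

output_layers = []  # source layer index behind each output layer
for start, end in slices:
    output_layers.extend(range(start, end))

print(len(output_layers))    # 48 layers, up from the source model's 40
print(output_layers[20:28])  # [20, 21, 22, 23, 16, 17, 18, 19] -- duplication begins
```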

### Models Merged

The following models were included in the merge:

* [RozGrov/NemoDori-v0.2-12B-MN-BT](https://huggingface.co/RozGrov/NemoDori-v0.2-12B-MN-BT)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: RozGrov/NemoDori-v0.2-12B-MN-BT
      layer_range: [0, 8]
  - sources:
    - model: RozGrov/NemoDori-v0.2-12B-MN-BT
      layer_range: [8, 24]
      parameters:
        scale:
          - filter: q_proj
            value: 0.919
          - filter: k_proj
            value: 0.919
          - value: 1.0
  - sources:
    - model: RozGrov/NemoDori-v0.2-12B-MN-BT
      layer_range: [16, 32]
      parameters:
        scale:
          - filter: q_proj
            value: 0.919
          - filter: k_proj
            value: 0.919
          - value: 1.0
  - sources:
    - model: RozGrov/NemoDori-v0.2-12B-MN-BT
      layer_range: [32, 40]
merge_method: passthrough
dtype: bfloat16
```
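Note the slice arithmetic: 8 + 16 + 16 + 8 = 48 output layers versus the source's 40, which is where the jump from 12B to roughly 14B comes from. The duplicated middle slices also scale their `q_proj`/`k_proj` weights by 0.919, presumably to tame attention in the repeated layers. As a quick sanity check, this sketch recomputes the layer count (it assumes PyYAML and that the config above is saved as `config.yaml`, a hypothetical filename):

```python
# Recompute the merged layer count from the YAML above; PyYAML is the only dependency.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

total = sum(
    src["layer_range"][1] - src["layer_range"][0]
    for s in cfg["slices"]
    for src in s["sources"]
)
print(total)  # 8 + 16 + 16 + 8 = 48
```

To run the merge itself, mergekit's CLI takes the same file, e.g. `mergekit-yaml config.yaml ./output-model-directory`.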