---
base_model:
  - codellama/CodeLlama-70b-Instruct-hf
  - cognitivecomputations/dolphin-2.9.1-llama-3-70b
  - abacusai/Smaug-Llama-3-70B-Instruct-32K
  - migtissera/Llama-3-70B-Synthia-v3.5
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using mergekit.
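
Once merged, the model loads like any other `transformers` causal LM. A minimal usage sketch, assuming a placeholder repository id (substitute the actual path where this merge is hosted):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hugging Face path of this merge.
model_id = "wassemgtk/merged-llama3-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype declared in the merge config
    device_map="auto",
)

inputs = tokenizer("Write a Python function to reverse a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```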

## Merge Details

### Merge Method

This model was merged using the linear merge method.
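
For reference, a linear merge takes a weighted average of the corresponding tensors across the source models, so with one model at weight 1.0 and the others at 0.0 (as in the configuration below) the result is simply that model's tensor. A minimal sketch of the arithmetic, not mergekit's actual implementation:

```python
import torch

def linear_merge(tensors, weights):
    """Weighted average of matching parameter tensors from several models."""
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, tensors)) / total

a, b = torch.randn(4, 4), torch.randn(4, 4)
merged = linear_merge([a, b], [1.0, 0.0])  # second model contributes nothing
assert torch.allclose(merged, a)
```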

### Models Merged

The following models were included in the merge:

* [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf)
* [cognitivecomputations/dolphin-2.9.1-llama-3-70b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b)
* [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K)
* [migtissera/Llama-3-70B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-70B-Synthia-v3.5)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: linear # use linear so we can include multiple models per slice, albeit at zero weight
parameters:
  weight: 1.0 # weight everything at 1 unless specified otherwise; a linear merge of a single model at weight 1 is effectively a passthrough
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b # embed_tokens comes along for the ride with whichever model supplies the first layer
        layer_range: [0, 1]
      - model: migtissera/Llama-3-70B-Synthia-v3.5 # add a dummy second model with 0 weight so the tokenizer-based merge routine is invoked for embed_tokens
        layer_range: [0, 1]
        parameters:
          weight: 0
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [1, 20]
  - sources:
      - model: migtissera/Llama-3-70B-Synthia-v3.5
        layer_range: [10, 30]
  - sources:
      - model: codellama/CodeLlama-70b-Instruct-hf
        layer_range: [20, 40]
  - sources:
      - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
        layer_range: [25, 45]
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [30, 50]
  - sources:
      - model: migtissera/Llama-3-70B-Synthia-v3.5
        layer_range: [40, 60]
  - sources:
      - model: codellama/CodeLlama-70b-Instruct-hf
        layer_range: [50, 70]
  - sources:
      - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
        layer_range: [55, 75]
  - sources:
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [60, 79]
  - sources: # same as above, but for lm_head with the last layer
      - model: cognitivecomputations/dolphin-2.9.1-llama-3-70b
        layer_range: [79, 80]
      - model: migtissera/Llama-3-70B-Synthia-v3.5
        layer_range: [79, 80]
        parameters:
          weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.9.1-llama-3-70b
```
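
As a quick sanity check on the slice layout, the `layer_range` spans can be tallied. Assuming mergekit's half-open `[start, end)` convention, the stitched model stacks 180 transformer layers drawn from the four 80-layer sources:

```python
# layer_range spans from the config above, in order.
slices = [
    (0, 1), (1, 20), (10, 30), (20, 40), (25, 45), (30, 50),
    (40, 60), (50, 70), (55, 75), (60, 79), (79, 80),
]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 180
```

The merge itself can be reproduced by feeding this YAML to mergekit, for example via its `mergekit-yaml` command-line entry point.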