  e88 88e                               d8     
 d888 888b  8888 8888  ,"Y88b 888 8e   d88     
C8888 8888D 8888 8888 "8" 888 888 88b d88888   
 Y888 888P  Y888 888P ,ee 888 888 888  888     
  "88 88"    "88 88"  "88 888 888 888  888     
      b                                        
      8b,                                      
 
  e88'Y88                  d8           888    
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888    
C8888     "8" 888 888 "  d88888 d88 88b 888    
 Y888  ,d ,ee 888 888     888   888   , 888    
  "88,d88 "88 888 888     888    "YeeP" 888    
                                               
PROUDLY PRESENTS         

Dendrite-L3-10B-exl2-rpcal

Quantized using 200 samples of 8192 tokens from an RP-oriented PIPPA dataset.
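For reference, exl2 quants of this kind are produced with exllamav2's convert.py. A hedged sketch of how a quant like 4b6h would typically be made — the source path and the calibration parquet filename are assumptions, not the actual files used:

import subprocess

# Illustrative only: drives exllamav2's convert.py with its standard flags.
# Paths and the calibration parquet are assumed, not taken from this card.
subprocess.run([
    "python", "exllamav2/convert.py",
    "-i", "./Dendrite-L3-10B",        # FP16 source model (assumed path)
    "-o", "./work",                   # scratch directory for the job
    "-cf", "./Dendrite-L3-10B-4b6h",  # compiled output directory
    "-b", "4.0",                      # bits per weight (the "4b" part)
    "-hb", "6",                       # lm_head bits (the "6h" part)
    "-c", "./pippa_rp.parquet",       # RP-oriented PIPPA calibration set (assumed file)
    "-l", "8192",                     # 8192 tokens per calibration row
    "-r", "200",                      # 200 calibration rows, per the note above
], check=True)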

Branches:

  • main -- measurement.json
  • 8b8h -- 8bpw, 8bit lm_head
  • 6b6h -- 6bpw, 6bit lm_head
  • 4b6h -- 4bpw, 6bit lm_head
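Each quant lives on its own branch, so pull the matching revision. A minimal sketch using huggingface_hub and a recent exllamav2 — the repo id is assumed from this card's title:

from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Fetch one quant level by branch name (repo id assumed from the card title).
model_dir = snapshot_download(
    repo_id="Quant-Cartel/Dendrite-L3-10B-exl2-rpcal",
    revision="6b6h",  # pick the branch that fits your VRAM budget
)

# Standard exllamav2 loading sequence: lazy cache, then autosplit across GPUs.
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello!", max_new_tokens=64))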

Original model link: Envoid/Dendrite-L3-10B

Original model README below.


This model is experimental and thus results cannot be guaranteed.

Dendrite-L3-10B

In a similar vein to Libra-19B, this model was created by taking all 32 layers of a base model and stacking on top of them the first 8 layers of a donor model in reverse order, for 40 layers in total (hence roughly 10B parameters).

In this case the base model was Poppy_Porpoise-DADA-8B and the donor model was Llama-3-8B-Instruct-DADA.

It was then finetuned for 10 epochs on the Dendrite dataset at a low learning rate to repair the disorder and integrate the donor layers.
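(The actual run used qlora-pipe, as noted at the end of this card.) Purely as an illustration of what such a low-learning-rate repair finetune looks like, a rough transformers + peft equivalent might be the following — the dataset file, rank, learning rate, and target modules are all assumptions, since the card doesn't state them:

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "./Dendrite-L3-10B-merged"  # assumed path to the raw passthrough merge
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token      # Llama-3 tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical stand-in for the Dendrite dataset (its contents aren't public).
ds = load_dataset("text", data_files={"train": "dendrite.txt"})["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=4096),
            remove_columns=["text"])

model = get_peft_model(model, LoraConfig(
    r=64, lora_alpha=64,  # assumed rank/alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        num_train_epochs=10,   # ten epochs, as described above
        learning_rate=1e-5,    # "low"; the exact value isn't given
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()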

The following mergekit config was used:

slices:
  - sources:
    - model: ./Poppy_Porpoise-DADA-8B
      layer_range: [0, 32]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [7, 8]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [6, 7]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [5, 6]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [4, 5]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [3, 4]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [2, 3]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [1, 2]
  - sources:
    - model: ./Llama-3-8B-Instruct-DADA
      layer_range: [0, 1]
merge_method: passthrough
dtype: float16
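To reproduce the stack, a config like this can be run with the mergekit-yaml CLI or from Python. A sketch assuming mergekit's documented Python entry points, with the config above saved as dendrite.yml:

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the passthrough config shown above (assumed saved as dendrite.yml).
with open("dendrite.yml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./Dendrite-L3-10B-merged",  # output directory (assumed name)
    options=MergeOptions(copy_tokenizer=True),
)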

Unlike in the case of Libra-19B, this model's moral alignment seems very much intact.

In order to get the best results from this model you should uncheck "skip special tokens" on your front-end and add "<|eot_id|>" to your custom stopping strings.
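Outside a front-end, the same behavior can be reproduced in plain transformers by treating <|eot_id|> as the stop token. A minimal sketch against the unquantized model:

from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Envoid/Dendrite-L3-10B"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

msgs = [{"role": "user", "content": "Hello!"}]
inputs = tok.apply_chat_template(msgs, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)

out = model.generate(
    inputs,
    max_new_tokens=256,
    # Stop on Llama-3's end-of-turn token, mirroring the stopping-string advice.
    eos_token_id=tok.convert_tokens_to_ids("<|eot_id|>"),
)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))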

It has been tested with a number of different Llama-3 prompt templates and seems to work well.

It regained its base assistant personality during the retraining process; however, using assistant-style prompt templates and assistant cards in SillyTavern gives it fairly interesting replies.

It has been tested in RP, assistant, and creative-writing use cases and at a quick glance performs well in each.

Training was done using qlora-pipe.
