
Lamarck 14B v0.4 Qwenvergence is a big step up for Lamarck in quality. It uses the same ingredients as previous Lamarck releases, combined more effectively, and it shows decent wit and stronger reasoning than v0.3.

Merge Details

This model was initialized from a model_stock merge and refined from there. No fine-tuning, no models apart from those listed or contained in Qwenvergence, no wild parties, and no sacrifices to unnamed deities were involved.

Models Merged

Top influences: These ancestors are in the Qwenvergence model_stock, reinforced in later steps:

Prose added:

The prose quality has taken a leap, no doubt owing in part to the way EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2, sthenno-com/miscii-14b-1028, oxyapi/oxy-1-small, and underwoods/medius-erebus-magnum-14b were applied.

Configuration

The following YAML configuration was used to finalize this model:

name:                Lamarck-14B-v0.4-Qwenvergence
merge_method:        ties
base_model:          sometimesanotion/lamarck-14b-base
tokenizer_source:    base
parameters:         
  density:           1.00
  weight:            1.00
  int8_mask:         true
  normalize:         true
  rescale:           false
models:
  - model:           merges/Qwen2.5-14B-Qwenvergence-slerp
    parameters:
      weight:        1.00
      density:       1.00
  - model:           arcee-ai/Virtuoso-Small
    parameters:
      weight:        1.00
      density:       1.00
Model size: 14.8B parameters · Tensor type: BF16 (Safetensors)
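
Usage

For anyone who wants to try the merge directly, here is a minimal sketch of loading the published checkpoint (the 14.8B-parameter BF16 repository this card describes) with Hugging Face transformers. The prompt, dtype, and device settings below are illustrative assumptions, not recommendations from this card:

# Minimal sketch: load the merged checkpoint and run a short chat prompt.
# Assumes the repository id shown on this card and enough GPU memory for a
# 14.8B-parameter BF16 model; device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sometimesanotion/Lamarck-14B-v0.4-Qwenvergence"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 tensors
    device_map="auto",
)

# Example prompt; the content is arbitrary.
messages = [{"role": "user", "content": "Summarize TIES merging in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))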