A newer version of this model is available: sometimesanotion/Lamarck-14B-v0.4-Qwenvergence


Lamarck-14B is the product of a multi-stage merge that emphasizes arcee-ai/Virtuoso-Small in the early and finishing layers, shifts strongly toward reasoning through the middle layers, and ends with the final layers balanced somewhat toward Virtuoso again.

For GGUFs, mradermacher/Lamarck-14B-v0.3-i1-GGUF has you covered. Thank you @mradermacher!

The merge strategy of Lamarck 0.3 can be summarized as:

  • Two model_stock merges commence specialized branches for reasoning and prose quality.
  • DELLA merges then refine both branches, re-emphasizing selected ancestors.
  • A SLERP merge blends Virtuoso with a DELLA merge of the two branches, balancing reasoning against prose quality for smooth instruction following (see the configuration sketch after this list).
  • A TIES merge finalizes and normalizes the result.
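
To make the SLERP stage concrete, here is a minimal sketch using mergekit's Python API (MergeConfiguration, run_merge, MergeOptions). It is illustrative only: the intermediate branch path and the layer-gradient t values are hypothetical placeholders, not the exact recipe used for Lamarck 0.3.

```python
# Illustrative sketch of a layer-gradient SLERP stage with mergekit.
# The branch model path and t values are hypothetical placeholders;
# this is not Lamarck's actual configuration.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

SLERP_CONFIG = """
merge_method: slerp
base_model: arcee-ai/Virtuoso-Small
slices:
  - sources:
      - model: arcee-ai/Virtuoso-Small
        layer_range: [0, 48]    # assuming the 48-layer Qwen2.5-14B stack
      - model: ./della-reason-prose    # hypothetical DELLA merge of the two branches
        layer_range: [0, 48]
parameters:
  t:
    # Per-layer gradient: stay near Virtuoso (t close to 0) in the first and
    # last layers, lean toward the reasoning/prose branch through the middle.
    - value: [0.1, 0.4, 0.6, 0.4, 0.1]
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(SLERP_CONFIG))
run_merge(
    merge_config,
    out_path="./lamarck-slerp-stage",
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```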

(Diagram of the merge flow: graph.png)

Thanks go to:

  • @arcee-ai's team for the ever-capable mergekit, and the exceptional Virtuoso Small model.
  • @CultriX for the helpful examples of memory-efficient sliced merges and evolutionary merging. Their contribution of tinyevals on version 0.1 of Lamarck did much to validate the hypotheses of the DELLA->SLERP gradient process used here.
  • The authors behind the capable models that appear in the model_stock.

Models Merged

Top influences: These ancestors serve as base models and are present in the model_stock merges, and they are heavily re-emphasized in the DELLA and SLERP merges.

  • arcee-ai/Virtuoso-Small - A brand new model from Arcee, refined from the notable cross-architecture Llama-to-Qwen distillation arcee-ai/SuperNova-Medius. The first two layers are nearly exclusively from Virtuoso. It has proven to be a well-rounded performer, and contributes a noticeable boost to the model's prose quality.

  • CultriX/SeQwence-14B-EvolMerge - A top contender on reasoning benchmarks.

Reason: While Virtuoso is the strongest influence in the starting and ending layers, the reasoning module carries the middle layers, where a DELLA merge re-emphasizes selected reasoning ancestors.

Prose: While the prose module is gently applied, its impact on Lamarck 0.3's prose quality is noticeable; a DELLA merge re-emphasizes the contributions of two of its models in particular.

Model stock: Two model_stock merges, each specialized for a specific aspect of performance, mildly influence a broad range of the model's layers.
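
As a rough illustration of how such a branch can be built, the sketch below chains a model_stock merge into a DELLA merge that re-emphasizes selected ancestors, again via mergekit's Python API. Every unlabeled model name, output path, and weight/density value is a hypothetical placeholder; this does not reproduce Lamarck 0.3's actual configuration.

```python
# Illustrative two-step branch: a gentle model_stock blend, followed by a DELLA
# merge that re-emphasizes selected ancestors. Placeholder names and values only.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge


def merge_from_yaml(config_text: str, out_path: str) -> None:
    """Validate a mergekit YAML config and run the merge."""
    config = MergeConfiguration.model_validate(yaml.safe_load(config_text))
    run_merge(config, out_path=out_path, options=MergeOptions(copy_tokenizer=True))


# Step 1: a broad model_stock blend for the reasoning branch.
MODEL_STOCK_CONFIG = """
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
models:
  - model: CultriX/SeQwence-14B-EvolMerge
  - model: arcee-ai/Virtuoso-Small
  - model: some-org/another-reasoning-model   # hypothetical placeholder
dtype: bfloat16
"""

# Step 2: a DELLA merge over the stock result, re-emphasizing top ancestors.
DELLA_CONFIG = """
merge_method: della
base_model: ./reason-model-stock
models:
  - model: CultriX/SeQwence-14B-EvolMerge
    parameters:
      weight: 0.6     # hypothetical emphasis values
      density: 0.5
  - model: arcee-ai/Virtuoso-Small
    parameters:
      weight: 0.3
      density: 0.5
dtype: bfloat16
"""

merge_from_yaml(MODEL_STOCK_CONFIG, "./reason-model-stock")
merge_from_yaml(DELLA_CONFIG, "./reason-della")
```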

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 36.58 |
| IFEval (0-Shot)     | 50.32 |
| BBH (3-Shot)        | 51.27 |
| MATH Lvl 5 (4-Shot) | 32.40 |
| GPQA (0-shot)       | 18.46 |
| MuSR (0-shot)       | 18.00 |
| MMLU-PRO (5-shot)   | 49.01 |
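
For clarity, the Avg. figure appears to be the plain arithmetic mean of the six benchmark scores above; a quick check:

```python
# Sanity check: "Avg." matches the arithmetic mean of the six benchmark scores.
scores = [50.32, 51.27, 32.40, 18.46, 18.00, 49.01]
print(round(sum(scores) / len(scores), 2))  # 36.58
```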
