

QuantFactory/Book-Gut12B-GGUF

This is a quantized version of ClaudioItaly/Book-Gut12B, created using llama.cpp.
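As a usage sketch (not part of the original card), one of the GGUF files in this repository can be loaded with llama-cpp-python. The filename below is hypothetical; substitute whichever quantization file you actually downloaded.

# Minimal sketch: load a GGUF quantization with llama-cpp-python
# (pip install llama-cpp-python). model_path is a hypothetical local
# filename; use the actual file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Book-Gut12B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,                            # context window size
)

output = llm("Write the opening paragraph of a gothic novel.", max_tokens=200)
print(output["choices"][0]["text"])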

Original Model Card

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
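For intuition, SLERP interpolates along the arc between two weight tensors rather than along the straight line a plain average would take. Below is a minimal, illustrative NumPy sketch of the idea, not mergekit's actual implementation.

import numpy as np

# Illustrative SLERP between two weight tensors at interpolation fraction t.
def slerp(t, v0, v1, eps=1e-8):
    a = v0.ravel() / (np.linalg.norm(v0) + eps)   # unit direction of v0
    b = v1.ravel() / (np.linalg.norm(v1) + eps)   # unit direction of v1
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    omega = np.arccos(dot)                        # angle between the tensors
    if omega < eps:                               # nearly parallel: plain lerp
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    mixed = (np.sin((1.0 - t) * omega) / so) * v0.ravel() \
          + (np.sin(t * omega) / so) * v1.ravel()
    return mixed.reshape(v0.shape)

# Toy example: blend two random "layers" halfway.
w0 = np.random.randn(4, 4)
w1 = np.random.randn(4, 4)
merged = slerp(0.5, w0, w1)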

Models Merged

The following models were included in the merge:

nbeerbower/Stella-mistral-nemo-12B-v2
nbeerbower/mistral-nemo-gutenberg-12B-v2

Configuration

The following YAML configuration was used to produce this model:


models:
  - model: nbeerbower/Stella-mistral-nemo-12B-v2
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v2
merge_method: slerp
tokenizer_merge_method: slerp
tokenizer_parameters:
  t: 0.3  # Gives more weight to the tokenizer
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v2
dtype: bfloat16
parameters:
  t: [0, 0.2, 0.4, 0.5, 0.4, 0.2, 0]  # Curve that slightly favors
  temp: 1.3  # Temperature to smooth the merge
density:  # Density merging to balance the characteristics of the two models
  - threshold: 0.1
    t: 0.7
  - threshold: 0.5
    t: 0.5
  - threshold: 0.9
    t: 0.3
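To reproduce a merge like this, the configuration is typically saved to a YAML file and passed to the mergekit command-line tool. A minimal sketch, assuming mergekit is installed (pip install mergekit) and the configuration above is saved as book_gut12b.yaml (a hypothetical filename):

# Minimal sketch: run the mergekit CLI on the configuration above.
# "book_gut12b.yaml" and the output directory are hypothetical names.
import subprocess

subprocess.run(
    ["mergekit-yaml", "book_gut12b.yaml", "./Book-Gut12B"],
    check=True,
)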
GGUF
Model size: 12.2B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
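A single quantization file can be fetched from the Hub with huggingface_hub. A minimal sketch; the filename is hypothetical and should be replaced with one actually listed in the repository:

# Minimal sketch: download one quantization file from the Hub
# (pip install huggingface_hub). The filename is hypothetical.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantFactory/Book-Gut12B-GGUF",
    filename="Book-Gut12B.Q4_K_M.gguf",  # hypothetical: check the repo's file list
)
print(path)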
