
Quantization made by Richard Erkhov.

- Github
- Discord
- Request more models

Phi-5B-Test - bnb 8bits
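The "bnb 8bits" in the name refers to bitsandbytes 8-bit quantization. As a rough illustration of the idea (a simplified sketch, not the actual LLM.int8() kernel, which works per-block and routes outlier features separately), absmax quantization rescales each weight tensor so its largest magnitude maps to 127, then rounds to int8:

```python
def absmax_quantize(weights):
    """Quantize a list of floats to int8 codes via absmax scaling.

    Simplified sketch of 8-bit quantization; the real bitsandbytes
    scheme quantizes per block and handles outlier columns in fp16.
    """
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.03, 0.9]
q, scale = absmax_quantize(weights)   # q -> [50, -127, 3, 90]
restored = dequantize(q, scale)       # close to the original weights
```

Each stored value shrinks from 32 (or 16) bits to 8, which is why the 8-bit repack of this 5.3B-parameter model is much smaller on disk and in VRAM than the fp16 original.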

Original model description:

```yaml
base_model: []
tags:
  - mergekit
  - merge
license: mit
```

Untitled Model (1)

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the passthrough merge method.

Models Merged

The following models were included in the merge:

Configuration

The following YAML configuration was used to produce this model:

```yaml
# First configuration (dare_ties merge):
models:
  - model: liminerity/Phigments12
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: l3utterfly/phi-2-layla-v1-chatml
    parameters:
      density: 0.8
      weight: [0, 0.5, 0.7, 1] # weight gradient

merge_method: dare_ties
base_model: liminerity/Phigments12
parameters:
  normalize: true
  int8_mask: true
dtype: float16
---
# Second configuration (passthrough merge):
dtype: float16
merge_method: passthrough
slices:
  - sources:
    - model: phi/
      layer_range: [0, 32]
  - sources:
    - model: phi/
      layer_range: [0, 32]
```
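A passthrough merge stacks the listed layer ranges without mixing weights, so duplicating layers 0–32 of a 32-layer Phi-2-style model yields a 64-layer network; this is how a ~2.7B-parameter base roughly doubles to the 5.3B parameters reported below. A minimal sketch of how the slices above expand into a stacked layer list (hypothetical helper, not mergekit's actual implementation):

```python
def expand_slices(slices):
    """Expand mergekit-style passthrough slices into a flat list of
    (source_model, layer_index) pairs. Illustrative only."""
    stacked = []
    for sl in slices:
        for src in sl["sources"]:
            start, end = src["layer_range"]  # half-open range [start, end)
            stacked.extend((src["model"], i) for i in range(start, end))
    return stacked

# The two slices from the passthrough configuration above:
slices = [
    {"sources": [{"model": "phi/", "layer_range": [0, 32]}]},
    {"sources": [{"model": "phi/", "layer_range": [0, 32]}]},
]
layers = expand_slices(slices)
# 64 layers total: layers 0-31 of phi/, then layers 0-31 of phi/ again
```

Because every transformer block is copied verbatim, no new weights are trained; only the depth (and parameter count) changes.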

Join the Replete AI Discord here!

Downloads last month: 2
Model size: 5.3B params (Safetensors)
Tensor types: F32, FP16, I8