This quant was made for and by Infermatic.ai

Steelskull/Q2.5-MS-Mistoria-72b-v2

What follows is a copy of the original model card.

Q2.5-MS-Mistoria-72b-v2 Data Card

Now the cute anime girl has your attention

Creator: SteelSkull

About Mistoria-72b-v2:

Name Legend:
Q2.5 = Qwen 2.5
MS = Model Stock
72B = it's 72B
v2 = it's the second version

This model is my second attempt at a 72B model. As usual, my goal is to merge the robust storytelling of multiple models while attempting to maintain intelligence.

Use the Qwen (ChatML) prompt format.
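
For reference, Qwen 2.5 chat models use a ChatML-style template; a minimal prompt sketch (the system message here is just an illustration) looks like:

<|im_start|>system
You are a helpful roleplay assistant.<|im_end|>
<|im_start|>user
{your message}<|im_end|>
<|im_start|>assistant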

Quants: (List of badasses)

GGUF Quants:

- bartowski: Combined-GGUF

- mradermacher: GGUF // Imat-GGUF

Config:

# MODEL_NAME = "Q2.5-MS-Mistoria-72b-v2"
base_model: Nexusflow/Athene-V2-Chat
merge_method: model_stock
dtype: bfloat16
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
  - model: ZeusLabs/Chronos-Platinum-72B
  - model: shuttleai/shuttle-3
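
This is a mergekit model_stock configuration: the three listed models are merged on top of the Athene-V2-Chat base at bfloat16. Assuming mergekit's standard CLI and a hypothetical config filename, reproducing the merge would look roughly like:

mergekit-yaml mistoria-v2.yml ./Q2.5-MS-Mistoria-72b-v2 --cuda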

If you wish to support:

Model: Infermatic/Q2.5-MS-Mistoria-72b-v2-FP8-Dynamic
Format: Safetensors
Model size: 72.7B params
Tensor types: BF16, F8_E4M3
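
Since this repo is an FP8-Dynamic quant, here is a minimal serving sketch, assuming vLLM (which can load compressed-tensors FP8 checkpoints); the tensor-parallel size and sampling settings are illustrative assumptions, not recommendations:

from vllm import LLM, SamplingParams

# Load the FP8-Dynamic quant; tensor_parallel_size=4 is an assumption about available GPUs.
llm = LLM(
    model="Infermatic/Q2.5-MS-Mistoria-72b-v2-FP8-Dynamic",
    tensor_parallel_size=4,
    max_model_len=8192,
)

# Build a ChatML-style prompt (see the format note above).
prompt = (
    "<|im_start|>system\nYou are a helpful roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite the opening line of a mystery story.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

outputs = llm.generate([prompt], SamplingParams(temperature=0.8, max_tokens=256))
print(outputs[0].outputs[0].text)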