
A merge of the following models:

TheBloke_Mistral-7B-Claude-Chat-GPTQ
TheBloke_airoboros-mistral2.2-7B-GPTQ
TheBloke_ANIMA-Phi-Neptune-Mistral-7B-GPTQ
TheBloke_Arithmo-Mistral-7B-GPTQ
TheBloke_AshhLimaRP-Mistral-7B-GPTQ
TheBloke_Astrid-Mistral-7B-GPTQ
TheBloke_Autolycus-Mistral_7B-GPTQ
TheBloke_Barcenas-Mistral-7B-GPTQ
TheBloke_blossom-v3-mistral-7B-GPTQ
TheBloke_CollectiveCognition-v1.1-Mistral-7B-GPTQ
TheBloke_dolphin-2.2.1-mistral-7B-GPTQ
TheBloke_Free_Sydney_V2_Mistral_7b-GPTQ
TheBloke_Generate_Question_Mistral_7B-GPTQ
TheBloke_Hermes-Trismegistus-Mistral-7B-GPTQ
TheBloke_Karen_TheEditor_V2_CREATIVE_Mistral_7B-GPTQ
TheBloke_Kimiko-Mistral-7B-GPTQ
TheBloke_Leo-Mistral-Hessianai-7B-Chat-GPTQ
TheBloke_MetaMath-Mistral-7B-GPTQ
TheBloke_Mistral-7B-AEZAKMI-v1-GPTQ
TheBloke_mistral-7B-dpo-v5-GPTQ
TheBloke_Mistral-7B-OpenOrca-GPTQ
TheBloke_Mistral-ClaudeLimaRP-v3-7B-GPTQ
TheBloke_Mistral-Trismegistus-7B-GPTQ
TheBloke_MistralLite-7B-GPTQ
TheBloke_mistral_7b_norobots-GPTQ
TheBloke_NeuralHermes-2.5-Mistral-7B-GPTQ
TheBloke_openbuddy-mistral-7B-v13.1-GPTQ
TheBloke_OpenHermes-2.5-Mistral-7B-GPTQ
TheBloke_openinstruct-mistral-7B-GPTQ
TheBloke_PiVoT-10.7B-Mistral-v0.2-RP-GPTQ
TheBloke_saiga_mistral_7b-GPTQ
TheBloke_samantha-1.2-mistral-7B-GPTQ
TheBloke_SauerkrautLM-7B-v1-mistral-GPTQ
TheBloke_SlimOpenOrca-Mistral-7B-GPTQ
TheBloke_speechless-code-mistral-7B-v1.0-GPTQ
TheBloke_Thespis-Mistral-7B-v0.6-GPTQ
TheBloke_Writing_Partner_Mistral_7B-GPTQ

The merge method: for each tensor element, select the candidate value with the smallest sum of relative absolute differences to the corresponding values in the other source models.
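One plausible reading of this selection rule can be sketched as follows. This is a minimal NumPy illustration, not the actual merge pipeline (which would operate on the repacked GPTQ tensors); the function name, the epsilon guard, and the exact normalization (dividing each absolute difference by the magnitude of the compared value) are assumptions.

```python
import numpy as np

def merge_min_relative_diff(tensors, eps=1e-8):
    """Element-wise merge sketch: at each position, keep the candidate
    value whose sum of relative absolute differences to the values from
    the other models is smallest (assumed interpretation)."""
    stacked = np.stack(tensors)                # (n_models, ...)
    # Pairwise absolute differences between every pair of candidates.
    diffs = np.abs(stacked[:, None] - stacked[None, :])   # (n, n, ...)
    # Normalize by the magnitude of the compared value (eps avoids /0).
    rel = diffs / (np.abs(stacked)[None, :] + eps)
    scores = rel.sum(axis=1)                   # (n_models, ...)
    winner = scores.argmin(axis=0)             # index of chosen model
    return np.take_along_axis(stacked, winner[None], axis=0)[0]
```

With two agreeing models and one outlier, the agreeing value wins at each position, since the outlier accumulates a large relative difference against both of the others. Note the pairwise difference tensor is quadratic in the number of models, so a production merge would more likely stream one pair at a time.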

The config files are copied from the TheBloke_Mistral-7B-Claude-Chat-GPTQ repository.

Model size: 1.2B params (Safetensors). Tensor types: I32, FP16.