
Contributors: Nicolas Mejia Petit


Mistral 14b: A New Base Model

The objective of this model is to serve as a fully open-source new base model with 14.3 billion parameters. It has been enhanced with a LoRA adapter attached to all 62 layers of the merged model. The model generates coherent output and responds accurately to inputs. However, it tends to append unasked follow-up questions when asked to process more than 512 tokens, which was its training limit with QLoRA.
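
For reference, here is a minimal inference sketch using Hugging Face Transformers; the prompt and generation settings below are illustrative and not part of this repo.

```python
# Minimal inference sketch. Generation settings are placeholders, not the
# authors' recommended values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vezora/Mistral-14b-Merge-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

prompt = "Explain what a LoRA adapter does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keeping prompt + completion under ~512 tokens (the adapter's training limit)
# helps avoid the model appending unrelated follow-up questions.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```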

With full fine-tuning, this model has the potential to deliver excellent performance.

The QLoRA adapter was trained via SFT on a modified version of the airoboros-m-7b-3.1.2 dataset, in the Alpaca format.
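
As a rough illustration of that setup (not the exact training recipe), the sketch below shows the standard Alpaca prompt template alongside a placeholder QLoRA configuration using peft and bitsandbytes; the rank, alpha, dropout, and target modules are assumed values.

```python
# Sketch of the Alpaca prompt template and a QLoRA-style config.
# All hyperparameters here are placeholders, not the values used for this adapter.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(example: dict) -> str:
    """Render one dataset record into the Alpaca format."""
    return ALPACA_TEMPLATE.format(**example)

# 4-bit quantization of the base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter applied to the attention projections of every layer;
# r / alpha / dropout are illustrative values only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```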

Model Creation

The model was created by merging Dolphin and Zephyr, along with Meta-math7b and Speechless code, into a single model. The layers of these models were stacked on top of each other to form the new model.
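
A layer-stacking ("passthrough") merge of this kind can be sketched with plain Transformers as below. The checkpoint ids and the layer split are assumptions for illustration only; the actual merge combined four source models into 62 layers.

```python
# Illustrative passthrough merge: stack decoder layers from two Mistral-7B
# checkpoints into one deeper model. Checkpoint ids and the layer split are
# assumptions for this sketch, not the exact recipe used for this repo.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/dolphin-2.2.1-mistral-7b", torch_dtype=torch.bfloat16
)
model_b = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16
)

# Keep A's embeddings, final norm, and LM head; splice B's upper layers on top
# of A's lower layers.
stacked = list(model_a.model.layers[:24]) + list(model_b.model.layers[8:])
model_a.model.layers = torch.nn.ModuleList(stacked)
model_a.config.num_hidden_layers = len(stacked)

model_a.save_pretrained("mistral-stacked-base")  # raw merge; needs further training
```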

Initially, the merged model's output was pure gibberish. To rectify this, a LoRA adapter was trained and merged across all layers.
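
The adapter-merge step can be sketched roughly as follows with peft; the adapter path and output directory are placeholders.

```python
# Sketch of attaching a trained LoRA adapter to the stacked base model and
# folding its weights into every layer. "path/to/lora-adapter" is a placeholder.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistral-stacked-base", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Merge the low-rank updates into the base weights so no adapter is needed at inference.
model = model.merge_and_unload()
model.save_pretrained("Mistral-14b-Merge-Base")
```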

Useful Resources

Source Models

Upcoming Mistral 30b

  • We currently have a Mistral model with 29 billion parameters (29.2B) in development. At present, its output is not yet refined and may come out as gibberish. If there is interest in the community in fine-tuning this model, we are open to uploading it in its current state; otherwise, we plan to complete our training process before making it available. You can let us know with a post in this repo's Discussions!