Mistral-7B and airoboros-mistral2.2-7b merged together, then fine-tuned with QLoRA on the PIPPA and LimaRPv3 datasets.
## Description
This repo contains fp16 files of Mistral-11B-Airoboros-RP-v1.
## Models and datasets used

- Mistral-7B-v0.1
- airoboros-mistral2.2-7b
- PIPPA dataset (11B QLoRA)
- LimaRPv3 dataset (11B QLoRA)
## Prompt template: Alpaca or default

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Or the default template:

```
USER: <prompt>
ASSISTANT:
```
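As a quick sketch, the Alpaca template above can be filled in programmatically before being passed to the model. `build_alpaca_prompt` is a hypothetical helper for illustration, not part of this repo:

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template shown above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Describe your character.")
print(prompt)
```

The model's completion is then generated after the trailing `### Response:` line.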
## The secret sauce

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 24]
  - sources:
      - model: teknium/airoboros-mistral2.2-7b
        layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
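The passthrough merge stacks two 24-layer slices, which is where the "11B" in the name comes from. A back-of-the-envelope check (assuming mergekit's end-exclusive `layer_range`; the per-layer scaling is a rough approximation, since embeddings and the LM head are not duplicated):

```python
# Layers taken from each source model (layer_range is [start, end), end exclusive).
mistral_layers = 24 - 0      # layers 0-23 of Mistral-7B-v0.1
airoboros_layers = 32 - 8    # layers 8-31 of airoboros-mistral2.2-7b
total_layers = mistral_layers + airoboros_layers
print(total_layers)  # 48 layers vs. 32 in a stock Mistral-7B

# Rough parameter estimate: scale Mistral-7B's ~7.24B parameters by the layer ratio.
# This slightly overestimates, since embeddings and the LM head are not duplicated.
approx_params = 7.24e9 * total_layers / 32
print(round(approx_params / 1e9, 1))  # roughly 10.9 -> marketed as "11B"
```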
Special thanks to Sushi.
If you want to support me, you can do so here.