---
license: apache-2.0
---
# Inairtra-7B
**Model Size:** 7B
An experimental (and beginner) model merge built on Intel's Neural Chat 7B.
## Model Details

Trained on: **Intel Xeon E5-2693v3 | NVIDIA RTX 2080 Ti | 128 GB DDR4 *(yes, I'm poor :( )***

The Inairtra-7B LLM is a model made by Bronya Rand (bronya_rand / Bronya-Rand) as a first learning project in merging models with [MergeKit](https://github.com/cg123/mergekit) and GGUF quantization. It uses Intel's [Neural Chat 7B V3.1](https://huggingface.co/Intel/neural-chat-7b-v3-1) as the base model, together with the additional Mistral models listed below.

The Inairtra-7B architecture is based on [**Mistral**](https://huggingface.co/mistralai/Mistral-7B-v0.1).

The models used to create the Inairtra-7B are as follows:

- Intel's Neural Chat 7B V3.1 ([Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1))
- Teknium's Airoboros Mistral 2.2 7B ([teknium/airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b))
- Maywell's Synatra 7B V0.3 RP ([maywell/Synatra-7B-v0.3-RP](https://huggingface.co/maywell/Synatra-7B-v0.3-RP))

## Prompt

The Inairtra-7B *should* (though this is untested) support the same prompt formats as Intel's Neural Chat, Airoboros Mistral, and Synatra.

### For Intel

```
### System:
{system}
### User:
{usr}
### Assistant:
```

### For Airoboros

```
USER: