---
base_model: jeiku/Average_Normie_v2_l3_8B
inference: false
library_name: transformers
merged_models:
- ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
- vicgalle/Roleplay-Llama-3-8B
- cgato/L3-TheSpice-8b-v0.1.3
- ResplendentAI/Kei_Llama3_8B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- mergekit
- merge
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
---

# jeiku/Average_Normie_v2_l3_8B AWQ

- Model creator: [jeiku](https://huggingface.co/jeiku)
- Original model: [Average_Normie_v2_l3_8B](https://huggingface.co/jeiku/Average_Normie_v2_l3_8B)

## Model Summary

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit). It was produced using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [ResplendentAI/Kei_Llama3_8B](https://huggingface.co/ResplendentAI/Kei_Llama3_8B) as the base.

The following models were included in the merge:

* [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
* [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co/vicgalle/Roleplay-Llama-3-8B)
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)

A sketch of what the merge configuration may have looked like appears at the end of this card.

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation WebUI](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code (see the usage sketches below)
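### Example merge configuration

The exact mergekit config used for the original model was not published on this card. The YAML below is a plausible reconstruction of a Model Stock merge over the models listed above; the `dtype` value is an assumption, not the author's setting.

```yaml
# Hypothetical reconstruction -- not the author's published config.
merge_method: model_stock
base_model: ResplendentAI/Kei_Llama3_8B
models:
  - model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/L3-TheSpice-8b-v0.1.3
dtype: bfloat16  # assumed
```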
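### Usage sketch: Transformers

A minimal sketch for loading this quant with Transformers, assuming version 4.35.0 or later with AutoAWQ installed and an NVIDIA GPU available. The repo id used below is an assumption; substitute the actual id of this AWQ repository if it differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this AWQ quant -- replace with the actual repository id.
model_id = "solidrust/Average_Normie_v2_l3_8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels compute in fp16
    device_map="auto",          # place layers on the available GPU(s)
)

# Format the prompt with the tokenizer's built-in Llama 3 chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```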
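### Usage sketch: vLLM

For serving, the same model can be loaded with vLLM (0.2.2 or later, per the list above). This too is an illustrative sketch with the same assumed repo id.

```python
from vllm import LLM, SamplingParams

# Assumed repo id -- replace with the actual AWQ repository id.
llm = LLM(model="solidrust/Average_Normie_v2_l3_8B-AWQ", quantization="awq")
params = SamplingParams(temperature=0.8, max_tokens=128)

outputs = llm.generate(["Write a haiku about porpoises."], params)
print(outputs[0].outputs[0].text)
```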