---
base_model: mlabonne/ChimeraLlama-3-8B
inference: false
library_name: transformers
license: other
merged_models:
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- Locutusque/Llama-3-Orca-1.0-8B
- abacusai/Llama-3-Smaug-8B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- merge
- mergekit
- lazymergekit
- llama
---

# mlabonne/ChimeraLlama-3-8B AWQ

- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [ChimeraLlama-3-8B](https://huggingface.co/mlabonne/ChimeraLlama-3-8B)

## Model Summary

ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.

ChimeraLlama-3-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
* [Locutusque/Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
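
## Usage example

As a minimal sketch of loading this AWQ checkpoint through Transformers 4.35.0+, assuming `autoawq` and a recent `transformers` are installed. The repository ID below is a placeholder, not confirmed by this card; substitute the actual AWQ repository for this model.

```python
# pip install autoawq "transformers>=4.35.0"
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed/placeholder repository ID -- replace with the actual AWQ checkpoint.
model_id = "solidrust/ChimeraLlama-3-8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers reads the AWQ quantization config stored in the checkpoint
# and loads the 4-bit weights onto the available NVIDIA GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3 Instruct checkpoints ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Explain what a model merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```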
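
For serving with vLLM (0.2.2 or later), a sketch under the same placeholder-repository assumption:

```python
from vllm import LLM, SamplingParams

# Assumed/placeholder repository ID -- replace with the actual AWQ checkpoint.
llm = LLM(model="solidrust/ChimeraLlama-3-8B-AWQ", quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what a model merge is."], params)
print(outputs[0].outputs[0].text)
```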