---
language:
- en
license: other
model_name: MythoMax L2 13B
base_model: Gryphe/MythoMax-L2-13b
inference: false
model_creator: Gryphe
model_type: llama
prompt_template: '```
{system_message}
### Instruction:
{prompt}
(For roleplay purposes, I suggest the following - Write <CHAR NAME>''s next reply
in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.)
### Response:
```
'
quantized_by: GusPuffy
---
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Part of work associated with <a href="https://www.sentientsimulations.com/">sentientsimulations.com</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
# MythoMax L2 13B - SqueezeLLM
- Model creator: [Gryphe](https://huggingface.co/Gryphe)
- Original model: [MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b)
## Description
This repo contains SqueezeLLM model files for [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
### About SqueezeLLM
https://github.com/SqueezeAILab/SqueezeLLM
Quantized using the steps here: https://github.com/SqueezeAILab/SqueezeLLM/tree/main/quantization
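The prompt template in this card's metadata is Alpaca-style (`### Instruction:` / `### Response:`). A minimal sketch of filling it in Python; the function and variable names here are illustrative, not part of any library:

```python
# Alpaca-style template taken from this model card's prompt_template field.
TEMPLATE = """{system_message}
### Instruction:
{prompt}
### Response:
"""

def build_prompt(system_message: str, prompt: str) -> str:
    """Substitute a system message and user prompt into the template."""
    return TEMPLATE.format(system_message=system_message, prompt=prompt)

print(build_prompt("You are a helpful assistant.",
                   "Summarize SqueezeLLM in one sentence."))
```

For roleplay, the card suggests an instruction of the form "Write &lt;CHAR NAME&gt;'s next reply in a chat between &lt;YOUR NAME&gt; and &lt;CHAR NAME&gt;. Write a single reply only." passed as the `{prompt}` value.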