|
--- |
|
license: other |
|
license_name: tongyi-qianwen |
|
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE |
|
language: |
|
- en |
|
- zh |
|
pipeline_tag: text-generation |
|
tags: |
|
- chat |
|
--- |
|
|
|
# Roleplay Quantization in EXL2 format for Magnum v1 |
|
|
|
Quantized to EXL2 using the [cleaned PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) roleplay dataset as calibration data. Uploading as I didn't see anyone else publish this one yet.
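
For reference, a quant like this is typically produced with exllamav2's `convert.py`, pointed at a calibration parquet built from the dataset above. The sketch below is illustrative only: the local paths and parquet file name are placeholders, the flags follow exllamav2's converter but may differ between versions, and this is not necessarily the exact command used for this upload.

```py
# Hedged sketch of an EXL2 conversion with a roleplay calibration set.
# Paths and the parquet file name are hypothetical; check `python convert.py -h`
# in your exllamav2 checkout for the flags your version supports.
import subprocess

subprocess.run([
    "python", "convert.py",                        # conversion script from the exllamav2 repository
    "-i", "./magnum-72b-v1",                       # unquantized source model (hypothetical local path)
    "-o", "./work",                                # scratch directory for the measurement pass
    "-cf", "./magnum-72b-v1-exl2-rpcal-4.0bpw8h",  # final quantized output directory
    "-b", "4.0",                                   # target bits per weight
    "-hb", "8",                                    # 8-bit output head (the "8h" in the branch name)
    "-c", "./pippa_cleaned.parquet",               # calibration parquet derived from PIPPA-cleaned
], check=True)
```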
|
|
|
[4.0bpw8h quants](https://huggingface.co/luigi86/magnum-72b-v1-exl2-rpcal/tree/4.0bpw8h) (4.0 bits per weight, 8-bit output head)
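
Loading and running that branch locally should look roughly like the sketch below. This assumes a recent exllamav2 and enough GPU memory for a ~4-bit 72B model; the class and method names follow exllamav2's example scripts but may change between versions, and the local path and sampler values are placeholders rather than recommendations.

```py
# Hedged sketch: assumes a recent exllamav2 and that the 4.0bpw8h branch has
# been downloaded to a local directory.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./magnum-72b-v1-exl2-rpcal"  # hypothetical local path to the quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)         # allocate the cache as layers are loaded
model.load_autosplit(cache)                      # split the weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                       # illustrative sampler values only
settings.top_p = 0.95

prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, 200))
```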
|
|
|
|
|
See [original model](https://huggingface.co/alpindale/magnum-72b-v1) for further details. |
|
|
|
|
|
# Original Model card |
|
|
|
![](https://files.catbox.moe/ngqnb1.png) |
|
|
|
This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct). |
|
|
|
|
|
## Prompting |
|
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:
|
|
|
```py |
|
"""<|im_start|>user |
|
Hi there!<|im_end|> |
|
<|im_start|>assistant |
|
Nice to meet you!<|im_end|> |
|
<|im_start|>user |
|
Can I ask a question?<|im_end|> |
|
<|im_start|>assistant |
|
""" |
|
``` |
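
As a convenience, here is a small helper, purely illustrative and not part of the model release, that assembles a prompt in this layout from a list of role/content messages and leaves an open assistant turn for generation:

```py
# Minimal ChatML prompt builder matching the layout shown above.
def build_chatml_prompt(messages: list[dict[str, str]]) -> str:
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

print(build_chatml_prompt([
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]))
```

If you use `transformers` instead, the tokenizer's chat template should produce an equivalent ChatML prompt via `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, though it may also insert a default system turn.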
|
|
|
## Credits |
|
|
|
This model has been a team effort, credits go to: |
|
|
|
- [Sao10K](https://huggingface.co/Sao10K) for help with (and cleaning up!) the dataset. |
|
- [alpindale](https://huggingface.co/alpindale) for the training. |
|
- [kalomaze](https://huggingface.co/kalomaze) for helping with the hyperparameter tuning. |
|
- Various other people for their continued help as we tuned the parameters and restarted failed runs. In no particular order: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun), [Lucy](https://huggingface.co/lucyknada), [Nopm](https://huggingface.co/nopm), [Mango](https://huggingface.co/MangoMango69420), and the rest of the Silly Tilly.
|
|
|
And last but not least, we'd like to thank [Kearm](https://twitter.com/Nottlespike) for sponsoring the compute needed to train this model. |
|
|
|
## Training |
|
The training was done with 55 million tokens of high-quality RP data, over 1.5 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model. |
|
|
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
|
|
## Safety |
|
... |
|
|