---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- en
- zh
pipeline_tag: text-generation
tags:
- chat
---

## This repo contains EXL2 quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-32b-v1).

## The base repo only contains the measurement file; see the revisions for your quant of choice.

- [measurement.json](https://huggingface.co/anthracite-org/magnum-32b-v1-exl2/tree/main)
- [3.0bpw](https://huggingface.co/anthracite-org/magnum-32b-v1-exl2/tree/3.0bpw)
- [4.0bpw](https://huggingface.co/anthracite-org/magnum-32b-v1-exl2/tree/4.0bpw)
- [5.0bpw](https://huggingface.co/anthracite-org/magnum-32b-v1-exl2/tree/5.0bpw)
- [6.0bpw](https://huggingface.co/anthracite-org/magnum-32b-v1-exl2/tree/6.0bpw)
- [8.0bpw](https://huggingface.co/anthracite-org/magnum-32b-v1-exl2/tree/8.0bpw) (uploading soon)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/PK7xRSd18Du0bX-w_t-9c.png)

This is the second in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. It is fine-tuned on top of [Qwen1.5 32B](https://huggingface.co/Qwen/Qwen1.5-32B).

## Prompting

The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```

## Credits

Three new general-purpose instruction-following datasets were added on top of the original Stheno dataset (which had certain low-quality entries purged).
The first two were designed specifically for the Magnum series, to better address prompt adherence and coherence:

- [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k)
- [Nopm/Opus_WritingStruct](https://huggingface.co/datasets/Nopm/Opus_WritingStruct)
- [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) (a ~16k-row subset)

This model has been a team effort, and the credit goes to all members of Anthracite.

## Training

Training was done for 2 epochs with a learning rate of 1e-05. We used 8x [NVIDIA H100 Tensor Core](https://www.nvidia.com/en-us/data-center/h100/) GPUs for full-parameter fine-tuning of the model.

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

...
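The ChatML layout shown in the Prompting section can be assembled programmatically. The sketch below is illustrative only (the `build_chatml_prompt` helper is not part of this repo): it wraps each turn in `<|im_start|>role ... <|im_end|>` markers and leaves a trailing open assistant header for the model to complete, reproducing the example prompt above.

```python
# Illustrative ChatML prompt builder (this helper is not shipped with the repo).
# Each (role, content) turn becomes:
#   <|im_start|>{role}\n{content}<|im_end|>\n
# and the prompt ends with an open assistant header for the model to continue.
def build_chatml_prompt(turns):
    parts = []
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    ("user", "Hi there!"),
    ("assistant", "Nice to meet you!"),
    ("user", "Can I ask a question?"),
])
print(prompt)
```

This produces the same string as the example in the Prompting section, ready to be passed to your inference backend as a raw completion prompt.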