---
license: llama2
language:
- en
library_name: peft
---

# **Introduction**

This is an SFT version of the Qwen/Qwen-72B model, llamafied for leaderboard submission.

## Details

### Used Libraries
- torch
- peft

### Used Datasets
- Open-Orca/SlimOrca
- No other dataset was used
- No benchmark test sets or training sets were used
- [Data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result:

| Model | ARC | MMLU | TruthfulQA | GSM8K |
|---|---|---|---|---|
| **V1.4** | result < 0.1, %: TBU | result < 0.1, %: TBU | result < 0.1, %: 0.71 | result < 0.1, %: TBU |

### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about the MoAI platform
- Or contact us directly at [contact@moreh.io](mailto:contact@moreh.io)

## License
TBU

## How to use

```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-70B-LoRA-V1.4")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-70B-LoRA-V1.4"
)
```
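Once the model and tokenizer are loaded, a quick generation smoke test can look like the sketch below. The prompt, token budget, and decoding settings are illustrative assumptions, not values specified by this card.

```python
# Minimal generation sketch; the prompt and decoding settings below are
# assumptions, not values prescribed by this model card.
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():  # inference only, no gradients needed
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,  # assumed budget for a short answer
        do_sample=False,    # greedy decoding for a deterministic check
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```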