---
library_name: peft
base_model: meta-math/MetaMath-Mistral-7B
license: apache-2.0
pipeline_tag: text-generation
language:
- en
---

# Model Card for SymPy-Mistral

## Model Details

### Model Description

- **Developed by:** Timofej Kiselev (tfshaman)
- **Model type:** Mistral fine-tuned to solve math word problems (MWPs) by generating symbolic SymPy expressions
- **Language(s) (NLP):** English; Python with SymPy
- **License:** Apache-2.0
- **Finetuned from model:** meta-math/MetaMath-Mistral-7B
- **Trained on:** the RCI cluster at the Research Center for Informatics, CTU Prague

### Model Sources

- **Paper:** https://dspace.cvut.cz/bitstream/handle/10467/115466/F3-BP-2024-Kiselev-Timofej-Thesis_Timofej_Kiselev.pdf

## Uses

The model answers English math word problems by generating symbolic SymPy expressions.

Input format: `f"Question {your_math_word_problem}\n\nAnswer: "`

### Direct Use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model; device_map="auto" places it on the GPU,
# so no explicit .to("cuda") is needed (.to() is unsupported for 4-bit models)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-math/MetaMath-Mistral-7B",
    quantization_config=bnb_config,
    device_map="auto",
)

# The adapter ships its own tokenizer; use left padding for generation
tokenizer = AutoTokenizer.from_pretrained(
    "tfshaman/SymPy-Mistral-tokenizer", use_fast=False, padding_side="left"
)
tokenizer.pad_token = ""

# Resize the embedding matrix to the extended vocabulary, then attach the LoRA adapter
base_model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(base_model, "tfshaman/SymPy-Mistral")
```

## Citation

    @mastersthesis{timofej2024velke,
      title={Velk{\'e} jazykov{\'e} modely pro numerick{\'e} dotazy},
      author={Kiselev, Timofej},
      type={{B.S.} thesis},
      year={2024},
      school={{\v{C}}esk{\'e} vysok{\'e} u{\v{c}}en{\'\i} technick{\'e} v Praze. Vypo{\v{c}}etn{\'\i} a informa{\v{c}}n{\'\i} centrum.}
    }

### Framework versions

- PEFT 0.7.1
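
### Example Inference

A minimal generation sketch, assuming the loading code under Direct Use above has already run. The example question, `max_new_tokens=256`, and greedy decoding are illustrative assumptions, not settings from the thesis:

```python
# Build a prompt in the input format given under "Uses"
problem = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)
prompt = f"Question {problem}\n\nAnswer: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and decode only the newly generated continuation,
# which is expected to contain a SymPy expression for the solution
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```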