---
pipeline_tag: text2text-generation
widget:
  - text: "CHNOS"
inference:
  parameters:
    top_k: 0
    top_p: 1
    repetition_penalty: 1.2
---

# mol2pro

You can download this model and run inference directly by following the instructions on GitHub: https://github.com/LDornfeld/mol2pro

The repository is currently private, so please ask me for access. It contains the encoder and decoder tokenizers and the model checkpoint from the last training save (87K).

The model already includes the correct generation configuration, so you can disable the `--setup_generationconfig` flag for generation.
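
Since the card's metadata declares `pipeline_tag: text2text-generation` together with the widget input and sampling parameters above, loading the model from the Hub would look roughly like the sketch below. This is an assumption based on that metadata, not instructions from the linked repo: the Hub id `LDornfeld/mol2pro` is inferred from the GitHub URL, and the repo is private, so you need access (and a Hub token) before this will run.

```python
from transformers import pipeline

# Assumed Hub id, inferred from the GitHub URL; the repo is private,
# so this requires access and an authenticated huggingface_hub login.
generator = pipeline(
    "text2text-generation",      # pipeline_tag from the card metadata
    model="LDornfeld/mol2pro",
)

result = generator(
    "CHNOS",                     # example input from the widget above
    top_k=0,                     # sampling parameters from the card's
    top_p=1.0,                   # inference section
    repetition_penalty=1.2,
)
print(result[0]["generated_text"])
```

Because the generation configuration is already bundled with the checkpoint, the sampling parameters shown here only mirror the card's inference settings; you can also call the pipeline without them.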