# GPT-Neo Model for Lean Tactics

In this project, we fine-tuned a small Hugging Face GPT-Neo model on a Lean tactic dataset. The input should be of the form:

```
<GOAL> Goal <PROOFSTEP>
```
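For instance, with a hypothetical goal `n + 0 = n` (an illustrative example, not from the original card), the prompt would be:

```
<GOAL> n + 0 = n <PROOFSTEP>
```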

The model can be loaded with the following code:

```python
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = GPT2Tokenizer.from_pretrained("Saisam/gpt-neo-math-small")
model = GPTNeoForCausalLM.from_pretrained("Saisam/gpt-neo-math-small")
```
More information can be found at https://github.com/saisurbehera/mathProof.

The current model outperforms GPT-f on the miniF2F benchmark.

This work was done together with Xihao Xhang and Moya Zhu.
