GPT-Neo Model for Lean Tactics

In this project, we fine-tuned a Hugging Face GPT-Neo small model on a Lean tactic dataset. The input should be of the form

<GOAL> Goal <PROOFSTEP>
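For example, an illustrative (made-up) goal would be encoded as:

<GOAL> n : ℕ ⊢ n + 0 = n <PROOFSTEP>

The model then completes the prompt with a candidate tactic.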

The model can be loaded with the following code.

from transformers import GPT2Tokenizer, GPTNeoForCausalLM
import torch

# GPT-Neo reuses the GPT-2 BPE tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("Saisam/gpt-neo-math-small")
model = GPTNeoForCausalLM.from_pretrained("Saisam/gpt-neo-math-small")
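Once loaded, the model can be prompted in the format above to suggest a tactic. The snippet below is a minimal sketch; the goal string and the decoding parameters (greedy decoding, a 64-token budget) are illustrative assumptions, not the project's evaluation settings.

prompt = "<GOAL> n : ℕ ⊢ n + 0 = n <PROOFSTEP>"  # illustrative goal, not from the dataset
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,                    # assumed budget for a single tactic
        do_sample=False,                      # greedy decoding for reproducibility
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizer has no pad token
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))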

More information can be found at https://github.com/saisurbehera/mathProof.

The current model outperforms GPT-f on the miniF2F benchmark.

This work was done together with Xihao Xhang and Moya Zhu.
