Sharathhebbar24 committed
Commit c608257
1 Parent(s): 6ce8268

Create README.md

Files changed (1):
  1. README.md +51 -0
README.md ADDED
@@ -0,0 +1,51 @@
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- maths
- gpt2
- mathgpt2
datasets:
- meta-math/MetaMathQA
widget:
- text: Which motion is formed by an incident particle?
  example_title: Example 1
- text: What type of diffusional modeling is used for diffusion?
  example_title: Example 2
---

This model is a fine-tuned version of `gpt2` trained on the `meta-math/MetaMathQA` dataset.
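
The card does not include the training script, so the following is only a hypothetical sketch of how such a fine-tune could look with the `transformers` `Trainer`; the preprocessing, sequence length, and hyperparameters are assumptions, not the author's confirmed setup:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

ds = load_dataset("meta-math/MetaMathQA", split="train")

def tokenize(batch):
    # Assumed preprocessing: join each question/answer pair into one sequence.
    text = [q + "\n" + a for q, a in zip(batch["query"], batch["response"])]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="math_gpt2", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False selects the causal (next-token) objective, not masked LM.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```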

## Model description

GPT-2 is a transformer model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.

Concretely, inputs are sequences of continuous text of a certain length, and the targets are the same sequence
shifted one token (a word or piece of a word) to the right. The model uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
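
As a minimal illustration of this objective (not part of the original card), `transformers` applies exactly this shift when the input IDs are passed as labels, so the loss at position `i` scores the prediction of the next token from tokens `1` to `i`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The derivative of x^2 is 2x.", return_tensors="pt")

# For causal LMs, the library shifts the labels one position to the right
# internally, so each token is predicted only from the tokens before it.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # cross-entropy over next-token predictions
```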

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is, however, best at what it was pre-trained for: generating text from a
prompt.

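One way to use those inner representations (an illustration, not from the original card) is to take the base model's hidden states as text features, e.g. by mean-pooling them into a fixed-size vector:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")  # base model, no LM head

inputs = tokenizer("What is 7 * 8?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)

# A simple sentence-level feature: average the token representations.
features = hidden.mean(dim=1)
print(features.shape)  # torch.Size([1, 768])
```
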
### To use this model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sharathhebbar24/math_gpt2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate_text(prompt):
    # Encode the prompt and generate up to 64 tokens.
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Trim the output at the last complete sentence.
    return generated[: generated.rfind(".") + 1]

prompt = (
    "Gracie and Joe are choosing numbers on the complex plane. "
    "Joe chooses the point $1+2i$. Gracie chooses $-1+i$. "
    "How far apart are Gracie and Joe's points?"
)
res = generate_text(prompt)
print(res)
```
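
For reference, the exact answer to the example prompt is the distance $|(1+2i)-(-1+i)| = |2+i| = \sqrt{5}$, which can be used to sanity-check the generated output.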