Deeokay committed on
Commit
1aa8b6a
1 Parent(s): 325d01c

Update README.md

Files changed (1)
  1. README.md +2 -33
README.md CHANGED
@@ -48,20 +48,12 @@ tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids('<|PAD|>')
 The order of tokens is as follows:
 
 ```python
-def combine_text(user_prompt, analysis, sentiment, new_response, classification):
-    # user_prompt = strip_extra_spaces(user_prompt)
-    # analysis = strip_extra_spaces(analysis)
-    # sentiment = strip_extra_spaces(sentiment)
-    # new_response = strip_extra_spaces(new_response)
-    # new_response = remove_intro(new_response, "Ah, an excellent question!")
-    # classification = strip_extra_spaces(classification)
-
-
+def combine_text(user_prompt, analysis, sentiment, new_response, classification):
     user_q = f"<|STOP|><|BEGIN_QUERY|>{user_prompt}<|END_QUERY|>"
     analysis = f"<|BEGIN_ANALYSIS|>{analysis}<|END_ANALYSIS|>"
     new_response = f"<|BEGIN_RESPONSE|>{new_response}<|END_RESPONSE|>"
-    sentiment = f"<|BEGIN_SENTIMENT|>Sentiment: {sentiment}<|END_SENTIMENT|><|STOP|>"
     classification = f"<|BEGIN_CLASSIFICATION|>{classification}<|END_CLASSIFICATION|>"
+    sentiment = f"<|BEGIN_SENTIMENT|>Sentiment: {sentiment}<|END_SENTIMENT|><|STOP|>"
     return user_q + analysis + new_response + classification + sentiment
 ```
@@ -75,7 +67,6 @@ from transformers import GPT2LMHeadModel, GPT2Tokenizer
 
 models_folder = "Deeokay/gpt2-medium-custom-v1.0"
 
-
 model = GPT2LMHeadModel.from_pretrained(models_folder)
 tokenizer = GPT2Tokenizer.from_pretrained(models_folder)
 
@@ -108,7 +99,6 @@ class Stopwatch:
         return "Stopwatch hasn't been stopped"
     return self.end_time - self.start_time
 
-
 stopwatch1 = Stopwatch()
 
 def generate_response(input_text, max_length=250):
@@ -247,25 +237,4 @@ Never tested on mathamatical knowledge.
 I quite enjoy how the response feels closer to what I had in mind..
 
 
-### Model Description
-
-<!-- Provide a longer summary of what this model is. -->
-
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources [optional]
-
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
 
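For reference, the post-commit `combine_text` (the right-hand side of the first hunk) assembles to the function below. The sample field values are made up purely to illustrate the change: the serialized example still starts and ends with `<|STOP|>`, but classification is now emitted before sentiment.

```python
# Post-commit combine_text: each field is wrapped in its special tokens,
# with classification now placed ahead of sentiment.
def combine_text(user_prompt, analysis, sentiment, new_response, classification):
    user_q = f"<|STOP|><|BEGIN_QUERY|>{user_prompt}<|END_QUERY|>"
    analysis = f"<|BEGIN_ANALYSIS|>{analysis}<|END_ANALYSIS|>"
    new_response = f"<|BEGIN_RESPONSE|>{new_response}<|END_RESPONSE|>"
    classification = f"<|BEGIN_CLASSIFICATION|>{classification}<|END_CLASSIFICATION|>"
    sentiment = f"<|BEGIN_SENTIMENT|>Sentiment: {sentiment}<|END_SENTIMENT|><|STOP|>"
    return user_q + analysis + new_response + classification + sentiment

# Illustrative (hypothetical) field values:
sample = combine_text("What is 2+2?", "Simple arithmetic.", "Neutral", "4", "Math")
# sample begins "<|STOP|><|BEGIN_QUERY|>..." and ends "...<|END_SENTIMENT|><|STOP|>",
# with <|END_CLASSIFICATION|> appearing before <|BEGIN_SENTIMENT|>.
```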