kitrakrev committed
Commit 3b4a171 • 1 Parent(s): bb6173d

initial commit

Files changed (2):
  1. app.py +320 -0
  2. requirements.txt +8 -0
app.py ADDED
@@ -0,0 +1,320 @@
+ import os
+
+ import gradio as gr
+ from huggingface_hub import Repository
+ from text_generation import Client
+
+ # from dialogues import DialogueTemplate
+ from share_btn import (community_icon_html, loading_icon_html, share_btn_css,
+                        share_js)
+
+ HF_TOKEN = os.environ.get("HF_TOKEN", None)
+ API_TOKEN = os.environ.get("API_TOKEN", None)
+ # Fall back to the public Guanaco 33B inference endpoint when API_URL is unset.
+ API_URL = os.environ.get(
+     "API_URL",
+     "https://api-inference.huggingface.co/models/timdettmers/guanaco-33b-merged",
+ )
+
+ client = Client(
+     API_URL,
+     headers={"Authorization": f"Bearer {API_TOKEN}"},
+ )
+
+ repo = None
+
+
+ def get_total_inputs(inputs, chatbot, preprompt, user_name, assistant_name, sep):
+     """Flatten past (user, assistant) turns plus the new input into one prompt string."""
+     past = []
+     for user_data, model_data in chatbot:
+         if not user_data.startswith(user_name):
+             user_data = user_name + user_data
+         if not model_data.startswith(sep + assistant_name):
+             model_data = sep + assistant_name + model_data
+
+         past.append(user_data + model_data.rstrip() + sep)
+
+     if not inputs.startswith(user_name):
+         inputs = user_name + inputs
+
+     return preprompt + "".join(past) + inputs + sep + assistant_name.rstrip()
+
+
+ def has_no_history(chatbot, history):
+     return not chatbot and not history
+
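The flattening that `get_total_inputs` performs is easiest to see in a standalone sketch. The copy below mirrors the function above; the role prefixes, separator, and preprompt are illustrative placeholders, not the app's actual values:

```python
# Standalone sketch mirroring get_total_inputs: past (user, assistant) turns
# plus the new input are flattened into one prompt, with role prefixes added
# only when missing.
def get_total_inputs(inputs, chatbot, preprompt, user_name, assistant_name, sep):
    past = []
    for user_data, model_data in chatbot:
        if not user_data.startswith(user_name):
            user_data = user_name + user_data
        if not model_data.startswith(sep + assistant_name):
            model_data = sep + assistant_name + model_data
        past.append(user_data + model_data.rstrip() + sep)
    if not inputs.startswith(user_name):
        inputs = user_name + inputs
    return preprompt + "".join(past) + inputs + sep + assistant_name.rstrip()


# Toy values (assumptions for illustration only).
total = get_total_inputs(
    "How are you?",
    [("Hi", " Hello!")],
    preprompt="SYSTEM\n",
    user_name="User: ",
    assistant_name="Assistant:",
    sep="\n",
)
print(total)
```

The result is `SYSTEM` followed by the prior turn and the new question, ending with a bare `Assistant:` tag for the model to complete.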
+ header = """My name is Karthik Raja, I live in Chennai, India. I recently completed my bachelor's at SSN College of Engineering. As an experienced programmer, I have honed my skills in competitive programming and machine learning. Through my work in these areas, I have
+ developed a strong foundation in data analysis and model selection, which has allowed me to achieve high accuracy in my projects. My expertise
+ extends to computer vision and natural language processing, and I am particularly interested in exploring cutting-edge techniques like few-shot
+ learning and other meta-learning methods to enhance NLP applications. I have taken part in several ML competitions, including ImageCLEF and
+ HASOC, and have consistently ranked highly. I have also been exploring multilingual model analysis, leveraging the power of few-shot learning
+ to develop highly efficient and accurate models. Overall, my expertise in programming, machine learning, and NLP, combined with my passion
+ for exploring cutting-edge techniques such as few-shot learning, makes me a valuable asset to any team.
+ I completed my bachelor's at SSN College of Engineering, Chennai, India in Computer Science and Engineering with a consolidated CGPA of 8.9, between 2019 and 2023. This is my highest degree of qualification.
+ I did my industry internship at Citi Corp, India as a Website Developer between May 2022 and Aug 2022.
+ In this internship I collaborated with a four-member team to develop a full-fledged website using Spring Tools with data extraction from an H2 database.
+ I have a strong research profile as well: I have published 3 papers in conferences and 1 is under review in a journal.
+ My first publication is on a neural network for TB analysis, created for the CEUR-WS ImageCLEF contest and published in 2021.
+ The second, "Abusive and Threatening Language Detection in Native Urdu Script Tweets Exploring Four Conventional Machine Learning Techniques and MLP", appeared at the FIRE conference, where we used Naive Bayes, LSTM and BERT with different tokenizing methods with translation.
+ The third, titled "Offensive Text Prediction using Machine Learning and Deep Learning Approaches", appeared at a CEUR-WS conference, where we explored bagging-like techniques with the models mentioned above.
+ I submitted my final year project, on counterfactual detection, to the journal Neural Processing Letters; this is under review.
+ Apart from papers I have also contributed to the creation of an application for the
+ National Institute of Siddha - Ministry of AYUSH (GoI) and AIIMS Jodhpur, the Siddha Expert System, between Sep-Nov 2022, in which we
+ analyzed Siddha prognosis transcripts written in the Tamil regional language and built an expert system to perform a nine-way classification of Siddha diseases.
+ I also worked with the Tamil Nadu State Police on a Suspicious Vehicle Tracking System across multiple cameras between Feb 2022 - July 2022.
+ Here we analysed various deep learning models for feature extraction, techniques like key frame extraction, and various matching models like Siamese networks and metric measures like cosine distance for vehicle re-identification.
+ We used prebuilt Kalman filter and DeepSORT models to increase precision and avoid occlusion. In this project we experimented with various object detection, localization, and tracking models.
+ In another research endeavor we developed an arm prototype for an underwater vehicle for the Underwater Remotely Operated Vehicle Lab in my undergrad college.
+ For this I helped design a grabber arm using CAD, trained YOLO models for object detection, and worked on the design and movement of the arm.
+ Some of my other projects include:
+ Non-residential built-up area classification from medium-resolution satellite imagery, Chennai, India, for the India Meteorological Department (IMD), Ministry of Earth Sciences (MoES) (for this we won the Smart India Hackathon).
+ Person re-identification in a large-scale system in my undergrad college.
+ I have also contributed to open source, have regularly been part of Hacktoberfest, and have contributed to popular libraries like Ivy Unify; for more info check out https://github.com/kitrak-rev.
+ Connect with me on either: https://www.linkedin.com/in/kitrak-rev/, or https://github.com/kitrak-rev.
+ These are my profile links.
+ In my college I held the following positions:
+ • IEEE CS Student Chapter Core Committee Member (Vice Chair)
+ • IEEE Student Chapter Core Committee Member (Treasurer)
+ • ACM Student Chapter Core Committee Member (Event Deputy Head)
+ • Computer Society of India Student Chapter Core Committee Member (Vice Chair)
+ • SSN Coding Club Committee Member (Competitive Programming Team)
+ I was given the task of explaining the BART model and its usage in DALL-E-like models at the IVA pre-conference workshop 2023.
+ My full name is Karthik Raja Anandan.
+ Assume you are the Karthik Raja Anandan mentioned in the above text; keeping this in mind, give polite answers to the following questions in the first person. """
+ # Per-turn template only; generate() prepends the persona header once, so it
+ # must not be baked into the template as well.
+ prompt_template = "### Human: {query}\n### Assistant:{response}"
+
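For one prior exchange, the per-turn template assembles a prompt like the sketch below. This is a standalone illustration with a toy one-line header standing in for the long persona text above:

```python
# Sketch of the prompt assembly: persona header first, then one
# "### Human / ### Assistant" block per exchange, ending with an empty
# Assistant slot for the model to complete.
header = "You are a helpful persona.\n"  # stand-in for the real bio header
prompt_template = "### Human: {query}\n### Assistant:{response}"

past = [("Hi", " Hello!")]  # one prior (user, assistant) exchange
prompt = header
for query, response in past:
    prompt += "\n" + prompt_template.format(query=query, response=response)
prompt += prompt_template.format(query="Where do you live?", response="")

print(prompt)
```

The final line of the assembled prompt is a bare `### Assistant:`, which is what cues the model to answer in character.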
+
+
+ def generate(
+     user_message,
+     chatbot,
+     history,
+     temperature,
+     top_p,
+     max_new_tokens,
+     repetition_penalty,
+ ):
+     # Don't build a meaningless prompt when the input is empty.
+     if not user_message:
+         print("Empty input")
+
+     history.append(user_message)
+
+     past_messages = []
+     for user_data, model_data in chatbot:
+         past_messages.extend(
+             [{"role": "user", "content": user_data},
+              {"role": "assistant", "content": model_data.rstrip()}]
+         )
+
+     if len(past_messages) < 1:
+         prompt = header + prompt_template.format(query=user_message, response="")
+     else:
+         prompt = header
+         for i in range(0, len(past_messages), 2):
+             intermediate_prompt = prompt_template.format(
+                 query=past_messages[i]["content"],
+                 response=past_messages[i + 1]["content"],
+             )
+             prompt = prompt + "\n" + intermediate_prompt
+         prompt = prompt + prompt_template.format(query=user_message, response="")
+
+     # Clamp temperature away from zero: sampling breaks at exactly 0.
+     temperature = float(temperature)
+     if temperature < 1e-2:
+         temperature = 1e-2
+     top_p = float(top_p)
+
+     generate_kwargs = dict(
+         temperature=temperature,
+         max_new_tokens=max_new_tokens,
+         top_p=top_p,
+         repetition_penalty=repetition_penalty,
+         do_sample=True,
+         truncate=999,
+         seed=42,
+     )
+
+     stream = client.generate_stream(
+         prompt,
+         **generate_kwargs,
+     )
+
+     output = ""
+     chat = []
+     for idx, response in enumerate(stream):
+         if response.token.text == "":
+             break
+         if response.token.special:
+             continue
+         output += response.token.text
+         if idx == 0:
+             history.append(" " + output)
+         else:
+             history[-1] = output
+
+         # Pair the flat history list into (user, assistant) tuples for the UI.
+         chat = [(history[i].strip(), history[i + 1].strip())
+                 for i in range(0, len(history) - 1, 2)]
+
+         yield chat, history, user_message, ""
+
+     return chat, history, user_message, ""
+
+
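The pairing step inside the streaming loop above can be exercised on its own: `history` is a flat list alternating user and assistant strings, and the comprehension folds it into the `(user, assistant)` tuples that `gr.Chatbot` expects.

```python
# Standalone sketch of the history -> chat pairing used in generate():
# even indices are user turns, odd indices are assistant turns.
history = ["Hi", " Hello!", "How are you?", " Fine, thanks."]
chat = [(history[i].strip(), history[i + 1].strip())
        for i in range(0, len(history) - 1, 2)]
print(chat)
```

Stepping by 2 and stopping at `len(history) - 1` also silently drops a trailing unanswered user turn, which is the desired behavior while a reply is still streaming.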
+ examples = [
+     "A Llama entered my garden, what should I do?"
+ ]
+
+
+ def clear_chat():
+     return [], []
+
+
+ def process_example(args):
+     # generate() yields (chat, history, last_user_message, "") tuples; run it
+     # to completion with the UI's default sampling settings and keep the chat.
+     chat, history = [], []
+     for chat, history, _last_user_message, _cleared in generate(
+         args, [], [], 0.7, 0.9, 1024, 1.2
+     ):
+         pass
+     return chat
+
+
+ title = """<h1 align="center">Guanaco Playground 💬</h1>"""
+ custom_css = """
+ #banner-image {
+     display: block;
+     margin-left: auto;
+     margin-right: auto;
+ }
+ #chat-message {
+     font-size: 14px;
+     min-height: 300px;
+ }
+ """
+
+ with gr.Blocks(analytics_enabled=False, css=custom_css) as demo:
+     gr.HTML(title)
+
+     with gr.Row():
+         with gr.Column():
+             gr.Markdown(
+                 """
+                 💻 This demo is an AI clone of a person, built by prompting the Guanaco 33B model released together with the paper [QLoRA](https://arxiv.org/abs/2305.14314).
+                 <br />
+                 Note: The information given by the AI clone may not be 100% accurate; check with the bot's owner to confirm.
+                 """
+             )
+
+     with gr.Row():
+         with gr.Box():
+             output = gr.Markdown("Ask any questions that you want to ask Karthik Raja")
+             chatbot = gr.Chatbot(elem_id="chat-message", label="AI clone of Karthik Raja")
+
+     with gr.Row():
+         with gr.Column(scale=3):
+             user_message = gr.Textbox(placeholder="Enter your message here", show_label=False, elem_id="q-input")
+             with gr.Row():
+                 send_button = gr.Button("Send", elem_id="send-btn", visible=True)
+                 clear_chat_button = gr.Button("Clear chat", elem_id="clear-btn", visible=True)
+
+             with gr.Accordion(label="Parameters", open=False, elem_id="parameters-accordion"):
+                 temperature = gr.Slider(
+                     label="Temperature",
+                     value=0.7,
+                     minimum=0.0,
+                     maximum=1.0,
+                     step=0.1,
+                     interactive=True,
+                     info="Higher values produce more diverse outputs",
+                 )
+                 top_p = gr.Slider(
+                     label="Top-p (nucleus sampling)",
+                     value=0.9,
+                     minimum=0.0,
+                     maximum=1.0,
+                     step=0.05,
+                     interactive=True,
+                     info="Higher values sample more low-probability tokens",
+                 )
+                 max_new_tokens = gr.Slider(
+                     label="Max new tokens",
+                     value=1024,
+                     minimum=0,
+                     maximum=2048,
+                     step=4,
+                     interactive=True,
+                     info="The maximum number of new tokens",
+                 )
+                 repetition_penalty = gr.Slider(
+                     label="Repetition Penalty",
+                     value=1.2,
+                     minimum=0.0,
+                     maximum=10,
+                     step=0.1,
+                     interactive=True,
+                     info="The parameter for repetition penalty. 1.0 means no penalty.",
+                 )
+             with gr.Row():
+                 gr.Examples(
+                     examples=examples,
+                     inputs=[user_message],
+                     cache_examples=False,
+                     fn=process_example,
+                     outputs=[output],
+                 )
+
+     with gr.Row():
+         gr.Markdown(
+             "Disclaimer: The model can produce factually incorrect output, and should not be relied on to produce "
+             "factually accurate information. The model was trained on various public datasets; while great efforts "
+             "have been taken to clean the pretraining data, it is possible that this model could generate lewd, "
+             "biased, or otherwise offensive outputs.",
+             elem_classes=["disclaimer"],
+         )
+
+     history = gr.State([])
+     last_user_message = gr.State("")
+
+     user_message.submit(
+         generate,
+         inputs=[
+             user_message,
+             chatbot,
+             history,
+             temperature,
+             top_p,
+             max_new_tokens,
+             repetition_penalty,
+         ],
+         outputs=[chatbot, history, last_user_message, user_message],
+     )
+
+     send_button.click(
+         generate,
+         inputs=[
+             user_message,
+             chatbot,
+             history,
+             temperature,
+             top_p,
+             max_new_tokens,
+             repetition_penalty,
+         ],
+         outputs=[chatbot, history, last_user_message, user_message],
+     )
+
+     clear_chat_button.click(clear_chat, outputs=[chatbot, history])
+
+ demo.queue(concurrency_count=16).launch(debug=True)
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ einops
+ gradio
+ torch
+ transformers
+ sentencepiece
+ bitsandbytes
+ accelerate
+ text-generation