t.me/xtekky committed
Commit 011d0ba
1 Parent(s): 1010477

remove phind

README.md CHANGED
@@ -40,7 +40,6 @@ Please note the following:
  | **Usage Examples** | | | |
  | `forefront` | Example usage for forefront (gpt-4) | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./forefront/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | | |
  | `quora (poe)` | Example usage for quora | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./quora/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) | |
- | `phind` | Example usage for phind | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./phind/README.md) | ![Inactive](https://img.shields.io/badge/Active-brightgreen) |
  | `you` | Example usage for you | [![Link to File](https://img.shields.io/badge/Link-Go%20to%20File-blue)](./you/README.md) | ![Active](https://img.shields.io/badge/Active-brightgreen) |
  | **Try it Out** | | | |
  | Google Colab Jupyter Notebook | Example usage for gpt4free | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DanielShemesh/gpt4free-colab/blob/main/gpt4free.ipynb) | - |
@@ -65,7 +64,6 @@ Please note the following:
  | [writesonic.com](https://writesonic.com) | GPT-3.5 / Internet |
  | [t3nsor.com](https://t3nsor.com) | GPT-3.5 |
  | [you.com](https://you.com) | GPT-3.5 / Internet / good search|
- | [phind.com](https://phind.com) | GPT-4 / Internet / good search |
  | [sqlchat.ai](https://sqlchat.ai) | GPT-3.5 |
  | [chat.openai.com/chat](https://chat.openai.com/chat) | GPT-3.5 |
  | [bard.google.com](https://bard.google.com) | custom / search |
@@ -75,13 +73,10 @@ Please note the following:
  ## Best sites <a name="best-sites"></a>
  
  #### gpt-4
- - [`/phind`](./phind/README.md)
-   - pro: only stable gpt-4 with streaming ( no limit )
-   - contra: weird backend prompting
-   - why not `ora` anymore ? gpt-4 requires login + limited
+ - [`/forefront`](./forefront/README.md)
  
  #### gpt-3.5
- - looking for a stable api at the moment
+ - [`/you`](./you/README.md)
  
  ## Install <a name="install"></a>
  download or clone this GitHub repo
gui/README.md CHANGED
@@ -1,9 +1,11 @@
  # gpt4free gui
  
- preview:
+ move `streamlit_app.py` into base folder to run
+ 
  
+ preview:
  <img width="1125" alt="image" src="https://user-images.githubusercontent.com/98614666/234232398-09e9d3c5-08e6-4b8a-b4f2-0666e9790c7d.png">
- 
- run:
  
+ 
+ run:
  <img width="724" alt="image" src="https://user-images.githubusercontent.com/98614666/234232449-0d5cd092-a29d-4759-8197-e00ba712cb1a.png">
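
The run note matters because Python resolves `import you` relative to `sys.path`: moved into the base folder, the script sits next to the provider packages; left inside `gui/`, it needs the path workaround visible in the `streamlit_app.py` diff below. A minimal sketch of that workaround, assuming the provider packages (e.g. `you/`) live one directory above the script:

```python
# Sketch: make the repository root importable when streamlit_app.py stays inside gui/.
# Assumes the gpt4free provider packages (e.g. `you/`) live in the parent directory.
import os
import sys

# Append the parent directory (the repo root) so `import you` resolves from gui/.
sys.path.append(os.path.join(os.path.dirname(__file__), os.path.pardir))

import you  # provider package from the repo root
```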
gui/streamlit_app.py CHANGED
@@ -4,25 +4,16 @@ import sys
  sys.path.append(os.path.join(os.path.dirname(__file__), os.path.pardir))
  
  import streamlit as st
- import phind
- 
- # Set cloudflare clearance and user agent
- phind.cloudflare_clearance = ''
- phind.phind_api = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
- 
+ import you
  
  def get_answer(question: str) -> str:
      # Set cloudflare clearance cookie and get answer from GPT-4 model
      try:
-         result = phind.Completion.create(
-             model='gpt-4',
-             prompt=question,
-             results=phind.Search.create(question, actualSearch=True),
-             creative=False,
-             detailed=False,
-             codeContext=''
-         )
-         return result.completion.choices[0].text
+         result = you.Completion.create(
+             prompt = question)
+ 
+         return result['response']
+ 
      except Exception as e:
          # Return error message if an exception occurs
          return f'An error occurred: {e}. Please make sure you are using a valid cloudflare clearance token and user agent.'
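
For reference, the `you` call that the GUI now wraps can also be exercised outside Streamlit. A minimal sketch mirroring the new `get_answer()` body, assuming it runs from the repository root so the `you` package is importable (the prompt string is only an example):

```python
# Minimal standalone use of the you provider that gui/streamlit_app.py now calls.
import you

result = you.Completion.create(prompt='what is the airspeed velocity of an unladen swallow?')

# The provider returns a dict: 'response' holds the generated text,
# 'links' holds third-party search results when they are requested (otherwise None).
print(result['response'])
```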
phind/README.md DELETED
@@ -1,34 +0,0 @@
- ### Example: `phind` (use like openai pypi package) <a name="example-phind"></a>
-
- ```python
- import phind
-
- # set cf_clearance cookie (needed again)
- phind.cf_clearance = 'xx.xx-1682166681-0-160'
- phind.user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36' # same as the one from browser you got cf_clearance from
-
- prompt = 'who won the quatar world cup'
-
- # help needed: not getting newlines from the stream, please submit a PR if you know how to fix this
- # stream completion
- for result in phind.StreamingCompletion.create(
-         model = 'gpt-4',
-         prompt = prompt,
-         results = phind.Search.create(prompt, actualSearch = True), # create search (set actualSearch to False to disable internet)
-         creative = False,
-         detailed = False,
-         codeContext = ''): # up to 3000 chars of code
-
-     print(result.completion.choices[0].text, end='', flush=True)
-
- # normal completion
- result = phind.Completion.create(
-     model = 'gpt-4',
-     prompt = prompt,
-     results = phind.Search.create(prompt, actualSearch = True), # create search (set actualSearch to False to disable internet)
-     creative = False,
-     detailed = False,
-     codeContext = '') # up to 3000 chars of code
-
- print(result.completion.choices[0].text)
- ```

phind/__init__.py DELETED
@@ -1,289 +0,0 @@
- from datetime import datetime
- from queue import Queue, Empty
- from threading import Thread
- from time import time
- from urllib.parse import quote
-
- from curl_cffi.requests import post
-
- cf_clearance = ''
- user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
-
-
- class PhindResponse:
-     class Completion:
-         class Choices:
-             def __init__(self, choice: dict) -> None:
-                 self.text = choice['text']
-                 self.content = self.text.encode()
-                 self.index = choice['index']
-                 self.logprobs = choice['logprobs']
-                 self.finish_reason = choice['finish_reason']
-
-             def __repr__(self) -> str:
-                 return f'''<__main__.APIResponse.Completion.Choices(\n text = {self.text.encode()},\n index = {self.index},\n logprobs = {self.logprobs},\n finish_reason = {self.finish_reason})object at 0x1337>'''
-
-         def __init__(self, choices: dict) -> None:
-             self.choices = list(map(self.Choices, choices))
-
-     class Usage:
-         def __init__(self, usage_dict: dict) -> None:
-             self.prompt_tokens = usage_dict['prompt_tokens']
-             self.completion_tokens = usage_dict['completion_tokens']
-             self.total_tokens = usage_dict['total_tokens']
-
-         def __repr__(self):
-             return f'''<__main__.APIResponse.Usage(\n prompt_tokens = {self.prompt_tokens},\n completion_tokens = {self.completion_tokens},\n total_tokens = {self.total_tokens})object at 0x1337>'''
-
-     def __init__(self, response_dict: dict) -> None:
-         self.response_dict = response_dict
-         self.id = response_dict['id']
-         self.object = response_dict['object']
-         self.created = response_dict['created']
-         self.model = response_dict['model']
-         self.completion = self.Completion(response_dict['choices'])
-         self.usage = self.Usage(response_dict['usage'])
-
-     def json(self) -> dict:
-         return self.response_dict
-
-
- class Search:
-     def create(prompt: str, actualSearch: bool = True, language: str = 'en') -> dict: # None = no search
-         if user_agent == '':
-             raise ValueError('user_agent must be set, refer to documentation')
-         if cf_clearance == '':
-             raise ValueError('cf_clearance must be set, refer to documentation')
-
-         if not actualSearch:
-             return {
-                 '_type': 'SearchResponse',
-                 'queryContext': {
-                     'originalQuery': prompt
-                 },
-                 'webPages': {
-                     'webSearchUrl': f'https://www.bing.com/search?q={quote(prompt)}',
-                     'totalEstimatedMatches': 0,
-                     'value': []
-                 },
-                 'rankingResponse': {
-                     'mainline': {
-                         'items': []
-                     }
-                 }
-             }
-
-         headers = {
-             'authority': 'www.phind.com',
-             'accept': '*/*',
-             'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-             'cookie': f'cf_clearance={cf_clearance}',
-             'origin': 'https://www.phind.com',
-             'referer': 'https://www.phind.com/search?q=hi&c=&source=searchbox&init=true',
-             'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
-             'sec-ch-ua-mobile': '?0',
-             'sec-ch-ua-platform': '"macOS"',
-             'sec-fetch-dest': 'empty',
-             'sec-fetch-mode': 'cors',
-             'sec-fetch-site': 'same-origin',
-             'user-agent': user_agent
-         }
-
-         return post('https://www.phind.com/api/bing/search', headers=headers, json={
-             'q': prompt,
-             'userRankList': {},
-             'browserLanguage': language}).json()['rawBingResults']
-
-
- class Completion:
-     def create(
-             model='gpt-4',
-             prompt: str = '',
-             results: dict = None,
-             creative: bool = False,
-             detailed: bool = False,
-             codeContext: str = '',
-             language: str = 'en') -> PhindResponse:
-
-         if user_agent == '':
-             raise ValueError('user_agent must be set, refer to documentation')
-
-         if cf_clearance == '':
-             raise ValueError('cf_clearance must be set, refer to documentation')
-
-         if results is None:
-             results = Search.create(prompt, actualSearch=True)
-
-         if len(codeContext) > 2999:
-             raise ValueError('codeContext must be less than 3000 characters')
-
-         models = {
-             'gpt-4': 'expert',
-             'gpt-3.5-turbo': 'intermediate',
-             'gpt-3.5': 'intermediate',
-         }
-
-         json_data = {
-             'question': prompt,
-             'bingResults': results, # response.json()['rawBingResults'],
-             'codeContext': codeContext,
-             'options': {
-                 'skill': models[model],
-                 'date': datetime.now().strftime("%d/%m/%Y"),
-                 'language': language,
-                 'detailed': detailed,
-                 'creative': creative
-             }
-         }
-
-         headers = {
-             'authority': 'www.phind.com',
-             'accept': '*/*',
-             'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-             'content-type': 'application/json',
-             'cookie': f'cf_clearance={cf_clearance}',
-             'origin': 'https://www.phind.com',
-             'referer': 'https://www.phind.com/search?q=hi&c=&source=searchbox&init=true',
-             'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
-             'sec-ch-ua-mobile': '?0',
-             'sec-ch-ua-platform': '"macOS"',
-             'sec-fetch-dest': 'empty',
-             'sec-fetch-mode': 'cors',
-             'sec-fetch-site': 'same-origin',
-             'user-agent': user_agent
-         }
-
-         completion = ''
-         response = post('https://www.phind.com/api/infer/answer', headers=headers, json=json_data, timeout=99999,
-                         impersonate='chrome110')
-         for line in response.text.split('\r\n\r\n'):
-             completion += (line.replace('data: ', ''))
-
-         return PhindResponse({
-             'id': f'cmpl-1337-{int(time())}',
-             'object': 'text_completion',
-             'created': int(time()),
-             'model': models[model],
-             'choices': [{
-                 'text': completion,
-                 'index': 0,
-                 'logprobs': None,
-                 'finish_reason': 'stop'
-             }],
-             'usage': {
-                 'prompt_tokens': len(prompt),
-                 'completion_tokens': len(completion),
-                 'total_tokens': len(prompt) + len(completion)
-             }
-         })
-
-
- class StreamingCompletion:
-     message_queue = Queue()
-     stream_completed = False
-
-     def request(model, prompt, results, creative, detailed, codeContext, language) -> None:
-
-         models = {
-             'gpt-4': 'expert',
-             'gpt-3.5-turbo': 'intermediate',
-             'gpt-3.5': 'intermediate',
-         }
-
-         json_data = {
-             'question': prompt,
-             'bingResults': results,
-             'codeContext': codeContext,
-             'options': {
-                 'skill': models[model],
-                 'date': datetime.now().strftime("%d/%m/%Y"),
-                 'language': language,
-                 'detailed': detailed,
-                 'creative': creative
-             }
-         }
-
-         headers = {
-             'authority': 'www.phind.com',
-             'accept': '*/*',
-             'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-             'content-type': 'application/json',
-             'cookie': f'cf_clearance={cf_clearance}',
-             'origin': 'https://www.phind.com',
-             'referer': 'https://www.phind.com/search?q=hi&c=&source=searchbox&init=true',
-             'sec-ch-ua': '"Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"',
-             'sec-ch-ua-mobile': '?0',
-             'sec-ch-ua-platform': '"macOS"',
-             'sec-fetch-dest': 'empty',
-             'sec-fetch-mode': 'cors',
-             'sec-fetch-site': 'same-origin',
-             'user-agent': user_agent
-         }
-
-         response = post('https://www.phind.com/api/infer/answer',
-                         headers=headers, json=json_data, timeout=99999, impersonate='chrome110',
-                         content_callback=StreamingCompletion.handle_stream_response)
-
-         StreamingCompletion.stream_completed = True
-
-     @staticmethod
-     def create(
-             model: str = 'gpt-4',
-             prompt: str = '',
-             results: dict = None,
-             creative: bool = False,
-             detailed: bool = False,
-             codeContext: str = '',
-             language: str = 'en'):
-
-         if user_agent == '':
-             raise ValueError('user_agent must be set, refer to documentation')
-         if cf_clearance == '':
-             raise ValueError('cf_clearance must be set, refer to documentation')
-
-         if results is None:
-             results = Search.create(prompt, actualSearch=True)
-
-         if len(codeContext) > 2999:
-             raise ValueError('codeContext must be less than 3000 characters')
-
-         Thread(target=StreamingCompletion.request, args=[
-             model, prompt, results, creative, detailed, codeContext, language]).start()
-
-         while StreamingCompletion.stream_completed != True or not StreamingCompletion.message_queue.empty():
-             try:
-                 chunk = StreamingCompletion.message_queue.get(timeout=0)
-
-                 if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n':
-                     chunk = b'data: \n\n\r\n\r\n'
-
-                 chunk = chunk.decode()
-
-                 chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n')
-                 chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\n\r\n\r\n')
-                 chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '')
-
-                 yield PhindResponse({
-                     'id': f'cmpl-1337-{int(time())}',
-                     'object': 'text_completion',
-                     'created': int(time()),
-                     'model': model,
-                     'choices': [{
-                         'text': chunk,
-                         'index': 0,
-                         'logprobs': None,
-                         'finish_reason': 'stop'
-                     }],
-                     'usage': {
-                         'prompt_tokens': len(prompt),
-                         'completion_tokens': len(chunk),
-                         'total_tokens': len(prompt) + len(chunk)
-                     }
-                 })
-
-             except Empty:
-                 pass
-
-     @staticmethod
-     def handle_stream_response(response):
-         StreamingCompletion.message_queue.put(response)

phind/__pycache__/__init__.cpython-311.pyc DELETED
Binary file (8.64 kB)
 
testing/phind_test.py DELETED
@@ -1,34 +0,0 @@
- import phind
-
- # set cf_clearance cookie ( not needed at the moment)
- phind.cf_clearance = 'MDzwnr3ZWk_ap8u.iwwMR5F3WccfOkhUy_zGNDpcF3s-1682497341-0-160'
- phind.user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'
-
- prompt = 'hello world'
-
- # normal completion
- result = phind.Completion.create(
-     model='gpt-4',
-     prompt=prompt,
-     results=phind.Search.create(prompt, actualSearch=False),
-     # create search (set actualSearch to False to disable internet)
-     creative=False,
-     detailed=False,
-     codeContext='') # up to 3000 chars of code
-
- print(result.completion.choices[0].text)
-
- prompt = 'who won the quatar world cup'
-
- # help needed: not getting newlines from the stream, please submit a PR if you know how to fix this
- # stream completion
- for result in phind.StreamingCompletion.create(
-         model='gpt-4',
-         prompt=prompt,
-         results=phind.Search.create(prompt, actualSearch=True),
-         # create search (set actualSearch to False to disable internet)
-         creative=False,
-         detailed=False,
-         codeContext=''): # up to 3000 chars of code
-
-     print(result.completion.choices[0].text, end='', flush=True)

unfinished/easyai/main.py DELETED
@@ -1,42 +0,0 @@
- # Import necessary libraries
- from json import loads
- from os import urandom
-
- from requests import get
-
- # Generate a random session ID
- sessionId = urandom(10).hex()
-
- # Set up headers for the API request
- headers = {
-     'Accept': 'text/event-stream',
-     'Accept-Language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-     'Cache-Control': 'no-cache',
-     'Connection': 'keep-alive',
-     'Pragma': 'no-cache',
-     'Referer': 'http://easy-ai.ink/chat',
-     'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
-     'token': 'null',
- }
-
- # Main loop to interact with the AI
- while True:
-     # Get user input
-     prompt = input('you: ')
-
-     # Set up parameters for the API request
-     params = {
-         'message': prompt,
-         'sessionId': sessionId
-     }
-
-     # Send request to the API and process the response
-     for chunk in get('http://easy-ai.ink/easyapi/v1/chat/completions', params=params,
-                      headers=headers, verify=False, stream=True).iter_lines():
-
-         # Check if the chunk contains the 'content' field
-         if b'content' in chunk:
-             # Parse the JSON data and print the content
-             data = loads(chunk.decode('utf-8').split('data:')[1])
-
-             print(data['content'], end='')

you/__init__.py CHANGED
@@ -57,8 +57,7 @@ class Completion:
          r'(?<=event: youChatSerpResults\ndata:)(.*\n)*?(?=event: )', response.text
      ).group()
      third_party_search_results = re.search(
-         r'(?<=event: thirdPartySearchResults\ndata:)(.*\n)*?(?=event: )', response.text
-     ).group()
+         r'(?<=event: thirdPartySearchResults\ndata:)(.*\n)*?(?=event: )', response.text).group()
      # slots = findall(r"slots\ndata: (.*)\n\nevent", response.text)[0]
  
      text = ''.join(re.findall(r'{\"youChatToken\": \"(.*?)\"}', response.text))
@@ -69,7 +68,7 @@
      }
  
      return {
-         'response': text.replace('\\n', '\n').replace('\\\\', '\\'),
+         'response': text.replace('\\n', '\n').replace('\\\\', '\\').replace('\\"', '"'),
          'links': loads(third_party_search_results)['search']['third_party_search_results']
          if include_links
          else None,
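
The only behavioural change in `you/__init__.py` is the extra `.replace('\\"', '"')`, which also unescapes double quotes in the concatenated `youChatToken` text. A small illustration of what the full chain does (the input string here is invented for the example, not real provider output):

```python
# Escaped text as it might appear in the raw youChatToken stream (made-up example input).
raw = 'line one\\nsaid \\"hi\\" at C:\\\\Users'

# Unescape newlines, backslashes and, as of this commit, double quotes.
clean = raw.replace('\\n', '\n').replace('\\\\', '\\').replace('\\"', '"')
print(clean)
# line one
# said "hi" at C:\Users
```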