leonsimon23 committed on
Commit
ac13c4a
1 Parent(s): 83c31fa

Upload 11 files

Files changed (11)
  1. .gitignore +29 -0
  2. LICENSE +201 -0
  3. agents.py +132 -0
  4. agents2.py +38 -0
  5. agents3.py +105 -0
  6. agents4.py +107 -0
  7. index.html +13 -0
  8. package.json +28 -0
  9. requirements.txt +58 -0
  10. server.py +209 -0
  11. vite.config.js +18 -0
.gitignore ADDED
@@ -0,0 +1,29 @@
+ # See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
+
+ # dependencies
+ /node_modules
+ /.pnp
+ .pnp.js
+
+ # testing
+ /coverage
+
+ # production
+ /build
+ /dist
+ # misc
+ .DS_Store
+ .env.local
+ .env.development.local
+ .env.test.local
+ .env.production.local
+
+ npm-debug.log*
+ yarn-debug.log*
+ yarn-error.log*
+ package-lock.json
+ *.pyc
+ *.env
+ *summary.tex
+ finish-sami_key.pem
+
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
agents.py ADDED
@@ -0,0 +1,132 @@
+ # agents.py
+ import requests
+ import json
+ import os
+ import re  # Regular expressions, used to strip "Research question N:" prefixes
+ from flask import jsonify
+
+ api_key = os.getenv("API-KEY")
+
+
+ def generate_research_questions_and_purpose_with_gpt(objective, num_questions):
+     headers = {
+         "Authorization": f"Bearer {api_key}",
+         "Content-Type": "application/json"
+     }
+     # Construct the prompt dynamically (append, rather than overwrite, the framing line)
+     prompt_content = "You are a helpful assistant capable of generating research questions along with their purposes for a systematic literature review.\n"
+     prompt_content += (f"Given the research objective: '{objective}', generate {num_questions} distinct research questions, "
+                        "each followed by its specific purpose starting with a phrase such as 'To examine' or 'To investigate'.")
+     data = {
+         "model": "gpt-3.5-turbo",
+         "messages": [
+             {"role": "system", "content": "You are a helpful assistant capable of generating research questions along with their purposes for a systematic literature review."},
+             {"role": "user", "content": prompt_content}
+         ],
+         "temperature": 0.7
+     }
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
+     if response.status_code == 200:
+         result = response.json()
+         messages = result['choices'][0]['message']['content']
+         lines = [line for line in messages.strip().split('\n') if line]
+
+         question_purpose_objects = []
+         for i in range(0, len(lines), 2):
+             # Use a regex to remove "Research question X:" where X is any number
+             question = re.sub(r"^Research question( \d+)?: ", "", lines[i], flags=re.IGNORECASE)
+             purpose = lines[i + 1] if i + 1 < len(lines) else "Purpose not provided"
+             # Optionally, remove the prefix from purpose if needed
+             # purpose = purpose.replace("Purpose: ", "")
+             question_purpose_objects.append({"question": question, "purpose": purpose})
+
+         # The same shape is returned whether one or several questions were requested
+         return {"research_questions": question_purpose_objects}
+     else:
+         print(f"Error: {response.status_code}")
+         print(response.text)
+         return []
+
+
+ def generate_summary_conclusion(papers_info):
+     headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
+
+     prompt_parts = ["Summarize the conclusions of the following papers:"]
+     for paper in papers_info:
+         title = paper.get("title")
+         author = paper.get("creator", "An author")
+         year = paper.get("year", "A year")
+         prompt_parts.append(f"- '{title}' by {author} ({year})")
+     prompt = " ".join(prompt_parts)
+
+     data = {
+         "model": "gpt-3.5-turbo",
+         "messages": [
+             {"role": "system", "content": "You are a helpful assistant."},
+             {"role": "user", "content": prompt},
+         ],
+     }
+
+     response = requests.post(
+         "https://api.openai.com/v1/chat/completions",
+         headers=headers,
+         data=json.dumps(data),
+     )
+
+     if response.status_code == 200:
+         result = response.json()
+         return result["choices"][0]["message"]["content"].strip()
+     else:
+         return jsonify({"error": "Failed to generate a summary conclusion."}), 500
+
+
+ def generate_abstract_with_openai(prompt):
+     """Generates a summary abstract using OpenAI's GPT model based on the provided prompt."""
+     headers = {
+         "Authorization": f"Bearer {api_key}",  # API key is read from environment variables
+         "Content-Type": "application/json"
+     }
+     data = {
+         "model": "gpt-3.5-turbo",
+         "messages": [
+             {"role": "system", "content": "You are a helpful assistant."},
+             {"role": "user", "content": prompt}
+         ]
+     }
+
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
+     if response.status_code == 200:
+         result = response.json()
+         return result['choices'][0]['message']['content'].strip()
+     else:
+         raise Exception("Failed to generate a summary abstract from OpenAI.")
+
+
+ def generate_introduction_summary_with_openai(prompt):
+     headers = {
+         "Authorization": f"Bearer {api_key}",
+         "Content-Type": "application/json"
+     }
+     data = {
+         "model": "gpt-3.5-turbo",
+         "messages": [
+             {"role": "system", "content": "You are a helpful assistant."},
+             {"role": "user", "content": prompt}
+         ]
+     }
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
+     if response.status_code == 200:
+         result = response.json()
+         return result['choices'][0]['message']['content'].strip()
+     else:
+         raise Exception("Failed to generate the introduction summary from OpenAI.")
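The parsing step in `generate_research_questions_and_purpose_with_gpt` pairs consecutive non-empty reply lines as (question, purpose) and strips the numbered prefix with a regex. A minimal standalone sketch of just that step, run against a hypothetical model reply (the reply text is illustrative, not real API output):

```python
import re

def parse_questions_and_purposes(reply_text):
    """Pair consecutive non-empty lines as (question, purpose), stripping the numbered prefix."""
    lines = [line for line in reply_text.strip().split("\n") if line]
    pairs = []
    for i in range(0, len(lines), 2):
        # Remove a leading "Research question N: " if present (case-insensitive)
        question = re.sub(r"^Research question( \d+)?: ", "", lines[i], flags=re.IGNORECASE)
        purpose = lines[i + 1] if i + 1 < len(lines) else "Purpose not provided"
        pairs.append({"question": question, "purpose": purpose})
    return pairs

# Hypothetical model reply, used only for illustration
reply = (
    "Research question 1: How is X used in Y?\n"
    "Purpose: To examine the role of X.\n"
    "Research question 2: What limits X?"
)
parsed = parse_questions_and_purposes(reply)
```

Note the every-other-line pairing assumes the model alternates question and purpose lines exactly; an odd trailing line falls back to a placeholder purpose, as the third line above does.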
agents2.py ADDED
@@ -0,0 +1,38 @@
+ # agents2.py
+ import requests
+ import json
+ import os
+
+ api_key = os.getenv("API-KEY")
+
+
+ def extract_search_string(content):
+     # Return the first line that looks like a Boolean search string; fall back to the full content
+     possible_operators = ['AND', 'OR', 'NOT', '"']
+     for line in content.split('\n'):
+         if any(op in line for op in possible_operators):
+             return line
+     return content
+
+
+ def generate_search_string_with_gpt(objective, research_questions):
+     headers = {
+         "Authorization": f"Bearer {api_key}",
+         "Content-Type": "application/json"
+     }
+     # The prompt no longer spells out the logical operators explicitly
+     combined_prompt = (f"Given the research objective: '{objective}', and the following research questions: "
+                        f"{', '.join(research_questions)}, generate two concise search strings for identifying "
+                        "relevant literature for a literature review. Do not include OR. Use AND if needed.")
+     data = {
+         "model": "gpt-3.5-turbo",
+         "messages": [
+             {"role": "system", "content": "You are a helpful assistant."},
+             {"role": "user", "content": combined_prompt}
+         ],
+         "temperature": 0.7
+     }
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))
+     if response.status_code == 200:
+         result = response.json()
+         content = result['choices'][0]['message']['content']
+         return extract_search_string(content).strip()
+     else:
+         print(f"Error: {response.status_code}")
+         print(response.text)
+         return "An error occurred while generating the search string."
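`extract_search_string` is a pure heuristic, so its behavior is easy to pin down in isolation: it scans the reply line by line and returns the first line containing a Boolean operator or a double quote. A self-contained sketch with an invented reply (note the caveat that plain substring matching would also trigger on uppercase words like "STANDARD" that happen to contain an operator):

```python
def extract_search_string(content):
    """Return the first line containing AND/OR/NOT or a double quote; else the full content."""
    possible_operators = ['AND', 'OR', 'NOT', '"']
    for line in content.split('\n'):
        if any(op in line for op in possible_operators):
            return line
    return content

# Invented model reply: prose line first, then the actual search string
reply = 'Here are two options:\n"machine learning" AND healthcare\nA plainer phrasing'
picked = extract_search_string(reply)
```

The prose line contains no operator or quote, so the second line is picked; a reply with no qualifying line at all is returned unchanged.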
agents3.py ADDED
@@ -0,0 +1,105 @@
+ import csv
+ import os
+ import requests
+ from scholarly import ProxyGenerator, scholarly
+
+ api_key = os.getenv('ELSEVIER_API_KEY')
+ # Track whether the proxy setup has already run in this session
+ proxy_setup_done = False
+
+
+ def setup_proxy():
+     global proxy_setup_done
+     if not proxy_setup_done:
+         # Set up a ProxyGenerator object to use free proxies
+         pg = ProxyGenerator()
+         pg.FreeProxies()
+         scholarly.use_proxy(pg)
+         proxy_setup_done = True
+         print("Proxy setup completed.")
+     else:
+         print("Proxy setup was already completed earlier in this session.")
+
+
+ # Run the proxy setup once at import time
+ setup_proxy()
+
+
+ def fetch_papers(search_string, min_results=8):
+     search_query = scholarly.search_pubs(search_string)
+     papers_details = []
+     for _ in range(min_results):
+         try:
+             paper = next(search_query)
+             paper_details = {
+                 'title': paper['bib']['title'],
+                 'author': paper['bib'].get('author'),
+                 'pub_year': paper['bib'].get('pub_year'),
+                 'publication_url': paper.get('pub_url', 'Not Available'),
+                 'journal_name': paper['bib'].get('journal', 'Not Available'),
+                 # Extract DOI and publication date, and make an educated guess at the paper type
+                 'doi': paper.get('doi', 'Not Available'),
+                 'publication_date': paper['bib'].get('pub_year', 'Not Available'),  # Simplified to publication year
+                 'paper_type': 'Journal' if 'journal' in paper['bib'] else 'Conference' if 'conference' in paper['bib'] else 'Primary Study'  # Simplistic categorization
+             }
+             papers_details.append(paper_details)
+         except StopIteration:
+             break  # Exit if there are no more results
+     return papers_details
+
+
+ def save_papers_to_csv(papers_details, filename='papers.csv'):
+     fieldnames = ['title', 'author', 'pub_year', 'publication_url', 'journal_name', 'doi', 'publication_date', 'paper_type']
+     with open(filename, mode='w', newline='', encoding='utf-8') as file:
+         writer = csv.DictWriter(file, fieldnames=fieldnames)
+         writer.writeheader()
+         for paper in papers_details:
+             writer.writerow(paper)
+
+
+ def search_elsevier(search_string, start_year, end_year, limit):
+     url = "https://api.elsevier.com/content/search/scopus"
+     headers = {
+         "X-ELS-APIKey": api_key,
+         "Accept": "application/json"
+     }
+
+     # Note: only start_year is applied here; end_year is accepted but not yet used in the query
+     query = f"TITLE-ABS-KEY({search_string}) AND PUBYEAR = {start_year}"
+     params = {
+         "query": query,
+         "count": limit,
+     }
+
+     response = requests.get(url, headers=headers, params=params)
+     if response.status_code == 200:
+         response_data = response.json()
+         papers = response_data.get('search-results', {}).get('entry', [])
+         parsed_papers = []
+         for paper in papers:
+             parsed_paper = {
+                 "affiliation-country": next((affil.get("affiliation-country", "Not Available") for affil in paper.get("affiliation", [])), "Not Available"),
+                 "affilname": next((affil.get("affilname", "Not Available") for affil in paper.get("affiliation", [])), "Not Available"),
+                 "creator": paper.get("dc:creator", "Not Available"),
+                 "identifier": paper.get("dc:identifier", "Not Available"),
+                 "title": paper.get("dc:title", "Not Available"),
+                 "link": next((link["@href"] for link in paper.get("link", []) if link["@ref"] == "scopus"), "Not Available"),
+                 "year": paper.get("prism:coverDate", "Not Available").split("-")[0],
+                 "openaccess": paper.get("openaccess", "0") == "1",
+                 "publicationName": paper.get("prism:publicationName", "Not Available"),
+                 "aggregationType": paper.get("prism:aggregationType", "Not Available"),
+                 "volume": paper.get("prism:volume", "Not Available"),
+                 "doi": paper.get("prism:doi", "Not Available")
+             }
+             parsed_papers.append(parsed_paper)
+         return parsed_papers
+     else:
+         print(f"Failed to fetch papers: {response.status_code} {response.text}")
+         return {"error": "Failed to fetch papers from Elsevier", "status_code": response.status_code, "message": response.text}
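`search_elsevier` takes both `start_year` and `end_year` but its query pins only `PUBYEAR = start_year`. If the whole range is wanted, one possible query builder, sketched under the assumption that Scopus accepts chained `PUBYEAR >` / `PUBYEAR <` comparisons (check the Scopus search syntax before relying on this):

```python
def build_scopus_query(search_string, start_year, end_year):
    """Build a Scopus advanced-search query covering [start_year, end_year] inclusive."""
    # The > / < comparisons are exclusive, so widen each bound by one year
    return (f"TITLE-ABS-KEY({search_string}) "
            f"AND PUBYEAR > {start_year - 1} AND PUBYEAR < {end_year + 1}")

query = build_scopus_query("federated learning", 2019, 2023)
```

The resulting string would replace the single-year `query` in `search_elsevier`; everything else in that function stays the same.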
agents4.py ADDED
@@ -0,0 +1,107 @@
+ # agents4.py
+ import os
+ import re
+ import requests
+
+ api_key = os.getenv("API-KEY")
+
+
+ def check_paper_relevance_and_keywords(title, search_string):
+     headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
+     # Ask only for a to-the-point relevance verdict
+     prompt = (f"Determine if the paper titled '{title}' is relevant to the topic '{search_string}'. "
+               "Reply only with 'paper is relevant' or 'paper is not relevant', to the point.")
+
+     data = {
+         "model": "gpt-3.5-turbo",
+         "messages": [
+             {"role": "system", "content": "You are a knowledgeable assistant."},
+             {"role": "user", "content": prompt}
+         ]
+     }
+
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=data)
+     if response.status_code == 200:
+         result = response.json()
+         response_text = result['choices'][0]['message']['content'].strip().lower()
+         print(response_text)
+         # Treat anything other than an explicit "not relevant" as relevant
+         return "not relevant" not in response_text
+     else:
+         print(f"Error: {response.status_code}, Detail: {response.text}")
+         # Return a plain False: the earlier (False, []) tuple was truthy, so callers
+         # using truthiness would have counted error cases as relevant
+         return False
+
+
+ def filter_papers_with_gpt_turbo(search_string, papers):
+     filtered_papers = []
+     for paper in papers:
+         title = paper['title']
+         if check_paper_relevance_and_keywords(title, search_string):
+             filtered_papers.append(paper)
+     return filtered_papers
+
+
+ def is_response_relevant(response):
+     # Pattern that matches sentences indicating irrelevance
+     irrelevance_pattern = re.compile(r"does not appear to be directly relevant", re.IGNORECASE)
+     # Pattern that matches sentences indicating relevance
+     relevance_pattern = re.compile(r"topics related", re.IGNORECASE)
+
+     if irrelevance_pattern.search(response):
+         return False  # Irrelevant based on the matched pattern
+     if relevance_pattern.search(response):
+         return True  # Relevant based on the matched pattern
+     # If neither pattern matches, leave the decision to the caller
+     return None
+
+
+ def generate_response_gpt4_turbo(question, papers_info):
+     messages = [{
+         "role": "system",
+         "content": "You are a knowledgeable assistant who can answer research questions based on provided papers information."
+     }]
+
+     papers_context = "\n".join([f"- Title: '{paper['title']}', Author: {paper['creator']}, Year: {paper['year']}." for paper in papers_info])
+     messages.append({
+         "role": "system",
+         "content": f"Research Question: {question}\n\nPapers Information:\n{papers_context}"
+     })
+
+     messages.append({
+         "role": "user",
+         "content": "Based on the provided papers information, please answer the research question and cite relevant references for cross-verification."
+     })
+
+     headers = {
+         "Authorization": f"Bearer {api_key}",
+         "Content-Type": "application/json",
+     }
+
+     data = {
+         "model": "gpt-4-turbo-preview",
+         "messages": messages,
+         "temperature": 0.7,
+         "max_tokens": 256
+     }
+
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=data, timeout=800)
+
+     if response.status_code == 200:
+         result = response.json()
+         return result['choices'][0]['message']['content']
+     else:
+         print(f"Error: {response.status_code}")
+         print(response.text)
+         return "An error occurred while generating the response."
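`is_response_relevant` is a tri-state classifier over free-text verdicts: a matched irrelevance phrase wins, then a matched relevance phrase, and anything else is left undecided for the caller. A self-contained sketch with invented verdict strings showing all three outcomes:

```python
import re

def is_response_relevant(response):
    """Classify a free-text verdict as False (irrelevant), True (relevant), or None (undecided)."""
    if re.search(r"does not appear to be directly relevant", response, re.IGNORECASE):
        return False
    if re.search(r"topics related", response, re.IGNORECASE):
        return True
    return None  # neither pattern matched; caller decides the default

# Invented verdict texts covering the three branches
verdicts = [is_response_relevant(t) for t in (
    "The paper does not appear to be directly relevant to the query.",
    "The paper covers topics related to the search string.",
    "Hard to say from the title alone.",
)]
```

Because the irrelevance check runs first, a verdict matching both phrases is classified as irrelevant; that ordering is the function's tie-break.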
index.html ADDED
@@ -0,0 +1,13 @@
+ <!doctype html>
+ <html lang="en">
+   <head>
+     <meta charset="UTF-8" />
+     <link rel="icon" type="image/svg+xml" href="/vite.svg" />
+     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+     <title>Vite + React</title>
+   </head>
+   <body>
+     <div id="root"></div>
+     <script type="module" src="/src/main.jsx"></script>
+   </body>
+ </html>
package.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "name": "init",
+   "private": true,
+   "version": "0.0.0",
+   "type": "module",
+   "scripts": {
+     "dev": "vite",
+     "build": "vite build",
+     "lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0",
+     "preview": "vite preview"
+   },
+   "dependencies": {
+     "@ant-design/icons": "^5.3.1",
+     "antd": "^5.15.0",
+     "react": "^18.2.0",
+     "react-dom": "^18.2.0"
+   },
+   "devDependencies": {
+     "@types/react": "^18.2.56",
+     "@types/react-dom": "^18.2.19",
+     "@vitejs/plugin-react": "^4.2.1",
+     "eslint": "^8.56.0",
+     "eslint-plugin-react": "^7.33.2",
+     "eslint-plugin-react-hooks": "^4.6.0",
+     "eslint-plugin-react-refresh": "^0.4.5",
+     "vite": "^5.1.4"
+   }
+ }
requirements.txt ADDED
@@ -0,0 +1,58 @@
+ alabaster==0.7.16
+ anyio==4.3.0
+ arrow==1.3.0
+ attrs==23.2.0
+ Babel==2.14.0
+ beautifulsoup4==4.12.3
+ bibtexparser==1.4.1
+ blinker==1.7.0
+ certifi==2024.2.2
+ charset-normalizer==3.3.2
+ click==8.1.7
+ Deprecated==1.2.14
+ docutils==0.20.1
+ fake-useragent==1.5.0
+ Flask==3.0.2
+ Flask-Cors==4.0.0
+ free-proxy==1.1.1
+ h11==0.14.0
+ httpcore==1.0.4
+ httpx==0.27.0
+ idna==3.6
+ imagesize==1.4.1
+ itsdangerous==2.1.2
+ Jinja2==3.1.3
+ lxml==5.1.0
+ MarkupSafe==2.1.5
+ outcome==1.3.0.post0
+ packaging==24.0
+ Pygments==2.17.2
+ pyparsing==3.1.2
+ PySocks==1.7.1
+ python-dateutil==2.9.0.post0
+ python-dotenv==1.0.1
+ requests==2.31.0
+ scholarly==1.7.11
+ selenium==4.18.1
+ six==1.16.0
+ sniffio==1.3.1
+ snowballstemmer==2.2.0
+ sortedcontainers==2.4.0
+ soupsieve==2.5
+ Sphinx==7.2.6
+ sphinx-rtd-theme==2.0.0
+ sphinxcontrib-applehelp==1.0.8
+ sphinxcontrib-devhelp==1.0.6
+ sphinxcontrib-htmlhelp==2.0.5
+ sphinxcontrib-jquery==4.1
+ sphinxcontrib-jsmath==1.0.1
+ sphinxcontrib-qthelp==1.0.7
+ sphinxcontrib-serializinghtml==1.1.10
+ trio==0.24.0
+ trio-websocket==0.11.1
+ types-python-dateutil==2.8.19.20240106
+ typing_extensions==4.10.0
+ urllib3==2.2.1
+ Werkzeug==3.0.1
+ wrapt==1.16.0
+ wsproto==1.2.0
server.py ADDED
@@ -0,0 +1,209 @@
+ from dotenv import load_dotenv
+ import os
+ import tempfile
+ from flask import Flask, render_template, send_file, send_from_directory, request, jsonify
+ from agents import generate_research_questions_and_purpose_with_gpt, generate_abstract_with_openai, generate_summary_conclusion, generate_introduction_summary_with_openai
+ from agents2 import generate_search_string_with_gpt
+ from agents3 import fetch_papers, save_papers_to_csv, search_elsevier
+ from agents4 import filter_papers_with_gpt_turbo, generate_response_gpt4_turbo
+ from flask_cors import CORS
+ from datetime import datetime
+ 
+ load_dotenv()
+ 
+ key = os.getenv("ELSEVIER_API_KEY")
+ 
+ app = Flask(__name__, static_folder='dist')
+ CORS(app)
+ 
+ 
+ @app.route('/api/generate_search_string', methods=['POST'])
+ def generate_search_string_route():
+     data = request.json
+     objective = data.get('objective')
+     research_questions = data.get('research_questions', [])  # Default to an empty list if not provided
+ 
+     if not objective or not research_questions:
+         return jsonify({"error": "Objective and research questions are required."}), 400
+ 
+     search_string = generate_search_string_with_gpt(objective, research_questions)
+     return jsonify({"search_string": search_string})
+ 
+ 
+ @app.route('/api/generate_research_questions_and_purpose', methods=['POST'])
+ def generate_research_questions_and_purpose():
+     data = request.json
+     objective = data.get('objective')
+     num_questions = int(data.get('num_questions', 1))  # Ensure num_questions is treated as an integer
+ 
+     # Validate input
+     if not objective:
+         return jsonify({"error": "Objective is required"}), 400
+     if num_questions < 1:
+         return jsonify({"error": "Number of questions must be at least 1"}), 400
+ 
+     questions_and_purposes = generate_research_questions_and_purpose_with_gpt(objective, num_questions)
+     return jsonify({"research_questions": questions_and_purposes})
+ 
+ 
+ # Agent 4
+ @app.route('/api/filter_papers', methods=['POST'])
+ def filter_papers_route():
+     data = request.json
+     search_string = data.get('search_string', '')
+     papers = data.get('papers', [])  # Expecting only titles in papers
+ 
+     filtered_papers = filter_papers_with_gpt_turbo(search_string, papers)
+     return jsonify({"filtered_papers": filtered_papers})
+ 
+ 
+ @app.route('/api/answer_question', methods=['POST'])
+ def answer_question():
+     data = request.json
+     questions = data.get('questions')  # A list of questions
+     papers_info = data.get('papers_info', [])
+ 
+     if not questions or not papers_info:
+         return jsonify({"error": "Both questions and papers information are required."}), 400
+ 
+     answers = []
+     for question in questions:
+         answer = generate_response_gpt4_turbo(question, papers_info)
+         answers.append({"question": question, "answer": answer})
+ 
+     return jsonify({"answers": answers})
+ 
+ 
+ @app.route('/api/generate-summary-abstract', methods=['POST'])
+ def generate_summary_abstract():
+     try:
+         data = request.json
+ 
+         research_questions = data.get('research_questions', 'No research questions provided.')
+         objective = data.get('objective', 'No objective provided.')
+         search_string = data.get('search_string', 'No search string provided.')
+ 
+         # Construct the prompt for AI abstract generation
+         prompt = f"Based on the research questions '{research_questions}', the objective '{objective}', and the search string '{search_string}', generate a comprehensive abstract."
+ 
+         # Generate the abstract using OpenAI's GPT model
+         summary_abstract = generate_abstract_with_openai(prompt)
+ 
+         return jsonify({"summary_abstract": summary_abstract})
+     except Exception as e:
+         return jsonify({"error": str(e)}), 500
+ 
+ 
+ @app.route("/api/generate-summary-conclusion", methods=["POST"])
+ def generate_summary_conclusion_route():
+     data = request.json
+     papers_info = data.get("papers_info", [])
+     try:
+         summary_conclusion = generate_summary_conclusion(papers_info)
+         return jsonify({"summary_conclusion": summary_conclusion})
+     except Exception as e:
+         return jsonify({"error": str(e)}), 500
+ 
+ 
+ @app.route('/api/generate-introduction-summary', methods=['POST'])
+ def generate_introduction_summary():
+     try:
+         data = request.json
+         total_papers = len(data.get("all_papers", []))
+         filtered_papers_count = len(data.get("filtered_papers", []))
+         research_questions = data.get("research_questions", [])
+         objective = data.get("objective", "")
+         search_string = data.get("search_string", "")
+         answers = data.get("answers", [])
+ 
+         # Construct the introduction based on the provided data
+         prompt_intro = f"This document synthesizes findings from {total_papers} papers related to \"{search_string}\". Specifically, {filtered_papers_count} papers were thoroughly examined. The primary objective is {objective}."
+ 
+         prompt_questions = "\n\nResearch Questions:\n" + "\n".join([f"- {q}" for q in research_questions])
+ 
+         prompt_answers = "\n\nSummary of Findings:\n" + "\n".join([f"- {ans['question']}: {ans['answer'][:250]}..." for ans in answers])  # Brief summary of answers
+ 
+         prompt = prompt_intro + prompt_questions + prompt_answers + "\n\nGenerate a coherent introduction and summary based on this compilation."
+ 
+         # Generate the introduction summary using OpenAI's GPT model
+         introduction_summary = generate_introduction_summary_with_openai(prompt)
+ 
+         return jsonify({"introduction_summary": introduction_summary})
+     except Exception as e:
+         return jsonify({"error": str(e)}), 500
+ 
+ 
+ @app.route("/api/generate-summary-all", methods=["POST"])
+ def generate_summary_all_route():
+     data = request.json
+     abstract_summary = data.get("abstract_summary", "")
+     intro_summary = data.get("intro_summary", "")
+     conclusion_summary = data.get("conclusion_summary", "")
+ 
+     try:
+         # Render the LaTeX template 'latex_template.tex' from the 'templates' folder
+         latex_content = render_template(
+             "latex_template.tex",
+             abstract=abstract_summary,
+             introduction=intro_summary,
+             conclusion=conclusion_summary,
+         )
+ 
+         # Keep a timestamped copy of the LaTeX content next to this script
+         current_time = datetime.now().strftime('%Y%m%d%H%M%S')
+         milliseconds = datetime.now().microsecond // 1000
+         file_path = os.path.join(os.path.dirname(__file__), f"{current_time}_{milliseconds}summary.tex")
+         with open(file_path, "w", encoding="utf-8") as file:
+             file.write(latex_content)
+ 
+         # Serve a temporary copy to the client as a download
+         with tempfile.NamedTemporaryFile(mode='w+', suffix='.tex', delete=False, encoding='utf-8') as temp_file:
+             temp_file.write(latex_content)
+             temp_file_path = temp_file.name
+         return send_file(temp_file_path, as_attachment=True, download_name='paper_summary.tex')
+     except Exception as e:
+         return jsonify({"error": str(e)}), 500
+ 
+ 
+ # Routes for serving the built React app and its static files
+ @app.route('/')
+ def index():
+     return send_from_directory(app.static_folder, 'index.html')
+ 
+ 
+ @app.route('/<path:path>')
+ def serve(path):
+     if path != "" and os.path.exists(os.path.join(app.static_folder, path)):
+         return send_from_directory(app.static_folder, path)
+     else:
+         return send_from_directory(app.static_folder, 'index.html')
+ 
+ 
+ @app.route('/api/search_papers', methods=['POST'])
+ def search_papers():
+     data = request.json
+     search_string = data.get('search_string', '')
+     start_year = data.get('start_year', '')
+     end_year = data.get('end_year', '')
+     limit = data.get('limit', 4)  # Default to 4 papers if not specified
+ 
+     if not search_string or not start_year:
+         return jsonify({'error': 'Search string and start year are required.'}), 400
+ 
+     results = search_elsevier(search_string, start_year, end_year, limit)
+     return jsonify(results)
+ 
+ 
+ # Running app
+ if __name__ == '__main__':
+     app.run(debug=True)
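
As a sketch of how a client might exercise one of these endpoints (the route name comes from server.py above; the payload values and local URL are assumptions, and the Flask server must be running for the final, commented-out call):

```python
import json
from urllib import request

# Hypothetical payload for the research-questions endpoint
payload = {
    "objective": "Assess the use of LLMs in systematic literature reviews",
    "num_questions": 2,
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://127.0.0.1:5000/api/generate_research_questions_and_purpose",
    data=body,
    # Flask's request.json only parses bodies sent with this content type
    headers={"Content-Type": "application/json"},
    method="POST",
)

# with request.urlopen(req) as resp:  # uncomment with the server running
#     print(json.load(resp)["research_questions"])
```

During development the Vite dev server's `/api` proxy (see vite.config.js below) forwards the same paths to this Flask backend, so the browser client can use relative URLs.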
vite.config.js ADDED
@@ -0,0 +1,18 @@
+ import { defineConfig } from 'vite'
+ import react from '@vitejs/plugin-react'
+ 
+ // https://vitejs.dev/config/
+ export default defineConfig({
+   plugins: [react()],
+   server: {
+     proxy: {
+       '/api': {
+         target: 'http://127.0.0.1:5000',
+         changeOrigin: true,
+         secure: false,
+       }
+     }
+   }
+ })