import os
import yaml
import json
import re
import streamlit as st
from langchain.prompts import PromptTemplate
from langchain.schema import HumanMessage
from langchain.chat_models import ChatOpenAI

config_path = os.path.join(os.path.dirname(__file__), 'config', 'config.yaml')
with open(config_path, 'r') as file:
    config = yaml.safe_load(file)
model4_name = config["model4_name"]
model3_name = config["model3_name"]
api_key = config["openai_api_key"]
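
# Illustrative sketch of the keys this module expects in config/config.yaml. The
# concrete model names, API key, and template wording below are assumptions for
# documentation purposes, not the actual file shipped with the project:
#
#   model4_name: "gpt-4"
#   model3_name: "gpt-3.5-turbo"
#   openai_api_key: "sk-..."
#   numeric_attribute_template: |
#     Given the attributes {attributes} and the sample rows {data_frame_head},
#     reply with a ```json fenced object mapping each attribute to an encoding type.
#   # ...plus the other *_template keys referenced by the functions below.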

def decide_encode_type(attributes, data_frame_head, model_type = 4, user_api_key = None):
    """
    Decides the encoding type for given attributes using a language model via the OpenAI API.

    Parameters:
    - attributes (list): A list of attributes for which to decide the encoding type.
    - data_frame_head (DataFrame): The head of the DataFrame containing the attributes. This parameter is expected to be a representation of the DataFrame (e.g., a string or a small subset of the actual DataFrame) that gives an overview of the data.
    - model_type (int, optional): Specifies which model to use; 4 (the default) selects `model4_name`, while any other value falls back to `model3_name`.
    - user_api_key (str, optional): The user's OpenAI API key. If not provided, a default API key `api_key` is used.

    Returns:
    - A JSON object containing the recommended encoding types for the given attributes. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If there is an issue accessing the OpenAI API, such as an invalid API key or a network connection error, the function will raise an exception with a message indicating the problem.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)
        
        template = config["numeric_attribute_template"]
        prompt_template = PromptTemplate(input_variables=["attributes", "data_frame_head"], template=template)
        summary_prompt = prompt_template.format(attributes=attributes, data_frame_head=data_frame_head)
        
        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()
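
# Illustrative call (the column names and the returned mapping are made up; the real
# output shape depends on numeric_attribute_template in config.yaml):
#
#   import pandas as pd
#   df = pd.read_csv("data.csv")          # hypothetical dataset
#   encode_plan = decide_encode_type(["gender", "age"], df.head())
#   # -> e.g. {"gender": "one-hot", "age": "none"}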

def decide_fill_null(attributes, types_info, description_info, model_type = 4, user_api_key = None):
    """
    Decides how to fill null values for the given attributes using a language model via the OpenAI API.

    Parameters:
    - attributes (list): List of attribute names that contain null values.
    - types_info: Information about the data types of the attributes.
    - description_info: Descriptive statistics or information about the dataset, providing context for the fill decision.
    - model_type (int, optional): Specifies which model to use; 4 (the default) selects `model4_name`, while any other value falls back to `model3_name`.
    - user_api_key (str, optional): The user's OpenAI API key. If None, a default key is used.

    Returns:
    - dict: A JSON object with the recommended strategy for filling null values in each attribute. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If there is an issue accessing the OpenAI API, such as an invalid API key or a network connection error, the function will raise an exception with a message indicating the problem.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)
        
        template = config["null_attribute_template"]
        prompt_template = PromptTemplate(input_variables=["attributes", "types_info", "description_info"], template=template)
        summary_prompt = prompt_template.format(attributes=attributes, types_info=types_info, description_info=description_info)
        
        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()

def decide_model(shape_info, head_info, nunique_info, description_info, model_type = 4, user_api_key = None):
    """
    Decides the most suitable machine learning model based on dataset characteristics.

    Parameters:
    - shape_info (dict): Information about the shape of the dataset.
    - head_info (str or DataFrame): The head of the dataset or its string representation.
    - nunique_info (dict): Information about the uniqueness of dataset attributes.
    - description_info (str): Descriptive information about the dataset.
    - model_type (int, optional): Specifies which model to consult for decision-making.
    - user_api_key (str, optional): OpenAI API key for making requests.

    Returns:
    - dict: A JSON object containing the recommended model and configuration. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If there is an issue accessing the OpenAI API, such as an invalid API key or a network connection error, the function will raise an exception with a message indicating the problem.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)

        template = config["decide_model_template"]
        prompt_template = PromptTemplate(input_variables=["shape_info", "head_info", "nunique_info", "description_info"], template=template)
        summary_prompt = prompt_template.format(shape_info=shape_info, head_info=head_info, nunique_info=nunique_info, description_info=description_info)

        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()

def decide_cluster_model(shape_info, description_info, cluster_info, model_type = 4, user_api_key = None):
    """
    Determines the appropriate clustering model based on dataset characteristics.

    Parameters:
    - shape_info: Information about the dataset shape.
    - description_info: Descriptive statistics or information about the dataset.
    - cluster_info: Additional information relevant to clustering.
    - model_type (int, optional): The model type to use for decision making (default 4).
    - user_api_key (str, optional): The user's API key for OpenAI.

    Returns:
    - A JSON object with the recommended clustering model and parameters. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If unable to access the OpenAI API or another error occurs.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)

        template = config["decide_clustering_model_template"]
        prompt_template = PromptTemplate(input_variables=["shape_info", "description_info", "cluster_info"], template=template)
        summary_prompt = prompt_template.format(shape_info=shape_info, description_info=description_info, cluster_info=cluster_info)

        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()

def decide_regression_model(shape_info, description_info, Y_name, model_type = 4, user_api_key = None):
    """
    Determines the appropriate regression model based on dataset characteristics and the target variable.

    Parameters:
    - shape_info: Information about the dataset shape.
    - description_info: Descriptive statistics or information about the dataset.
    - Y_name: The name of the target variable.
    - model_type (int, optional): The model type to use for decision making (default 4).
    - user_api_key (str, optional): The user's API key for OpenAI.

    Returns:
    - A JSON object with the recommended regression model and parameters. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If unable to access the OpenAI API or another error occurs.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)

        template = config["decide_regression_model_template"]
        prompt_template = PromptTemplate(input_variables=["shape_info", "description_info", "Y_name"], template=template)
        summary_prompt = prompt_template.format(shape_info=shape_info, description_info=description_info, Y_name=Y_name)

        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()

def decide_target_attribute(attributes, types_info, head_info, model_type = 4, user_api_key = None):
    """
    Determines the target attribute for modeling based on dataset attributes and characteristics.

    Parameters:
    - attributes: A list of dataset attributes.
    - types_info: Information about the data types of the attributes.
    - head_info: A snapshot of the dataset's first few rows.
    - model_type (int, optional): The model type to use for decision making (default 4).
    - user_api_key (str, optional): The user's API key for OpenAI.

    Returns:
    - The name of the recommended target attribute. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If unable to access the OpenAI API or another error occurs.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)

        template = config["decide_target_attribute_template"]
        prompt_template = PromptTemplate(input_variables=["attributes", "types_info", "head_info"], template=template)
        summary_prompt = prompt_template.format(attributes=attributes, types_info=types_info, head_info=head_info)

        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)["target"]
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()

def decide_test_ratio(shape_info, model_type = 4, user_api_key = None):
    """
    Determines the appropriate train-test split ratio based on dataset characteristics.

    Parameters:
    - shape_info: Information about the dataset shape.
    - model_type (int, optional): The model type to use for decision making (default 4).
    - user_api_key (str, optional): The user's API key for OpenAI.

    Returns:
    - The recommended test-set proportion as a float. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If unable to access the OpenAI API or another error occurs.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)

        template = config["decide_test_ratio_template"]
        prompt_template = PromptTemplate(input_variables=["shape_info"], template=template)
        summary_prompt = prompt_template.format(shape_info=shape_info)

        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)["test_ratio"]
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()
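
# Illustrative use of the returned ratio (the sklearn call is only an example of how
# a caller might consume it; it is not part of this module):
#
#   from sklearn.model_selection import train_test_split
#   ratio = decide_test_ratio(shape_info=df.shape)
#   X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=ratio)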

def decide_balance(shape_info, description_info, balance_info, model_type = 4, user_api_key = None):
    """
    Determines the appropriate method to balance the dataset based on its characteristics.

    Parameters:
    - shape_info: Information about the dataset shape.
    - description_info: Descriptive statistics or information about the dataset.
    - balance_info: Additional information relevant to dataset balancing.
    - model_type (int, optional): The model type to use for decision making (default 4).
    - user_api_key (str, optional): The user's API key for OpenAI.

    Returns:
    - The recommended method to balance the dataset. Please refer to the prompt templates in config.yaml for details.

    Raises:
    - Exception: If unable to access the OpenAI API or another error occurs.
    """
    try:
        model_name = model4_name if model_type == 4 else model3_name
        user_api_key = api_key if user_api_key is None else user_api_key
        llm = ChatOpenAI(model_name=model_name, openai_api_key=user_api_key, temperature=0)

        template = config["decide_balance_template"]
        prompt_template = PromptTemplate(input_variables=["shape_info", "description_info", "balance_info"], template=template)
        summary_prompt = prompt_template.format(shape_info=shape_info, description_info=description_info, balance_info=balance_info)

        llm_answer = llm([HumanMessage(content=summary_prompt)])
        # Extract the JSON payload; fall back to the raw reply if no fenced block is found.
        json_str = llm_answer.content
        if '```json' in llm_answer.content:
            match = re.search(r'```json\n(.*?)```', llm_answer.content, re.DOTALL)
            if match:
                json_str = match.group(1)
        return json.loads(json_str)["method"]
    except Exception as e:
        st.error("Cannot access the OpenAI API. Please check your API key or network connection.")
        st.stop()
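
# Sketch of how these helpers might be chained by a caller (names such as `df` are
# placeholders; the actual Streamlit pipeline that drives them lives elsewhere):
#
#   target = decide_target_attribute(df.columns.tolist(), df.dtypes, df.head())
#   fill_plan = decide_fill_null(df.columns[df.isnull().any()].tolist(),
#                                df.dtypes, df.describe())
#   model_choice = decide_model(df.shape, df.head(), df.nunique(), df.describe())
#   balance_method = decide_balance(df.shape, df.describe(), df[target].value_counts())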